public class Analyst extends Destroyable

When Destroyable.destroy() is invoked, all transient data returned by the Analyst is freed and cannot be used anymore.

Modifier and Type | Method and Description |
---|---|
<V> EdgeProperty<java.lang.Double> |
adamicAdarCounting(PgxGraph graph)
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
|
<V> EdgeProperty<java.lang.Double> |
adamicAdarCounting(PgxGraph graph,
EdgeProperty<java.lang.Double> aa)
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
|
PgxFuture<EdgeProperty<java.lang.Double>> |
adamicAdarCountingAsync(PgxGraph graph)
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
|
PgxFuture<EdgeProperty<java.lang.Double>> |
adamicAdarCountingAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> aa)
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
|
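As a quick orientation before the rest of the table, the following is a minimal sketch of how adamicAdarCounting could be invoked. It assumes a running PGX installation; the session name and graph configuration path are placeholders, and resource cleanup beyond the Analyst itself is omitted. The later sketches in this section reuse an Analyst and PgxGraph obtained this way.

```java
import oracle.pgx.api.*;

public class AdamicAdarSketch {
    public static void main(String[] args) throws Exception {
        PgxSession session = Pgx.createSession("analyst-sketch");              // placeholder session name
        Analyst analyst = session.createAnalyst();
        PgxGraph graph = session.readGraphWithProperties("sample-graph.json"); // placeholder config path

        // One Adamic-Adar score is computed per edge and returned as an edge property.
        EdgeProperty<Double> aa = analyst.adamicAdarCounting(graph);
        System.out.println("result stored in edge property: " + aa.getName());

        analyst.destroy(); // frees all transient data returned by this Analyst
    }
}
```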
<ID> org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>> |
allReachableVerticesEdges(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int k)
Finds all the vertices and edges on the paths between the source and destination of length smaller than or equal to k.
|
<ID> PgxFuture<org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>>> |
allReachableVerticesEdgesAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int k)
Finds all the vertices and edges on the paths between the source and destination of length smaller than or equal to k.
|
<ID> org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>> |
allReachableVerticesEdgesFiltered(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int k,
EdgeFilter filter)
Finds all the vertices and edges on the paths between the source and destination of length smaller than or equal to k.
|
<ID> PgxFuture<org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>>> |
allReachableVerticesEdgesFilteredAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int k,
EdgeFilter filter)
Finds all the vertices and edges on the paths between the source and destination of length smaller than or equal to k.
|
<ID> VertexProperty<ID,java.lang.Double> |
approximateVertexBetweennessCentrality(PgxGraph graph,
int k)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
<ID> VertexProperty<ID,java.lang.Double> |
approximateVertexBetweennessCentrality(PgxGraph graph,
int k,
VertexProperty<ID,java.lang.Double> bc)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
approximateVertexBetweennessCentralityAsync(PgxGraph graph,
int k)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
approximateVertexBetweennessCentralityAsync(PgxGraph graph,
int k,
VertexProperty<ID,java.lang.Double> bc)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
<ID> VertexProperty<ID,java.lang.Double> |
approximateVertexBetweennessCentralityFromSeeds(PgxGraph graph,
PgxVertex<ID>... seeds)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
<ID> VertexProperty<ID,java.lang.Double> |
approximateVertexBetweennessCentralityFromSeeds(PgxGraph graph,
VertexProperty<ID,java.lang.Double> bc,
PgxVertex<ID>... seeds)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
approximateVertexBetweennessCentralityFromSeedsAsync(PgxGraph graph,
PgxVertex<ID>... seeds)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
approximateVertexBetweennessCentralityFromSeedsAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> bc,
PgxVertex<ID>... seeds)
Faster, but less accurate than betweenness centrality; it identifies important vertices for the flow of information.
|
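A small sketch of the approximate variant, assuming an Analyst and a loaded PgxGraph obtained as in the earlier sketch; k = 16 is an arbitrary sample value (more sampled sources gives better accuracy at higher cost).

```java
import oracle.pgx.api.*;

class ApproxBetweennessSketch {
    // Approximate betweenness centrality seeded from k randomly chosen vertices.
    static <ID> VertexProperty<ID, Double> run(Analyst analyst, PgxGraph graph) throws Exception {
        return analyst.approximateVertexBetweennessCentrality(graph, 16); // k = 16 (sample value)
    }
}
```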
<ID> VertexProperty<ID,java.lang.Boolean> |
bipartiteCheck(PgxGraph graph,
VertexProperty<ID,java.lang.Boolean> isLeft)
Bipartite check verifies whether a graph is bipartite.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Boolean>> |
bipartiteCheckAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Boolean> isLeft)
Bipartite check verifies whether a graph is bipartite.
|
<ID> VertexSet<ID> |
center(PgxGraph graph)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> VertexSet<ID> |
center(PgxGraph graph,
VertexSet<ID> center)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> PgxFuture<VertexSet<ID>> |
centerAsync(PgxGraph graph)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> PgxFuture<VertexSet<ID>> |
centerAsync(PgxGraph graph,
VertexSet<ID> center)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> VertexProperty<ID,java.lang.Double> |
closenessCentralityDoubleLength(PgxGraph graph,
EdgeProperty<java.lang.Double> cost)
Closeness centrality measures the centrality of the vertices based on weighted distances, making it possible to find well-connected vertices.
|
<ID> VertexProperty<ID,java.lang.Double> |
closenessCentralityDoubleLength(PgxGraph graph,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> cc)
Closeness centrality measures the centrality of the vertices based on weighted distances, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
closenessCentralityDoubleLengthAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> cost)
Closeness centrality measures the centrality of the vertices based on weighted distances, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
closenessCentralityDoubleLengthAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> cc)
Closeness centrality measures the centrality of the vertices based on weighted distances, making it possible to find well-connected vertices.
|
<ID> VertexProperty<ID,java.lang.Double> |
closenessCentralityUnitLength(PgxGraph graph)
Closeness centrality measures the centrality of the vertices based on distances, making it possible to find well-connected vertices.
|
<ID> VertexProperty<ID,java.lang.Double> |
closenessCentralityUnitLength(PgxGraph graph,
VertexProperty<ID,java.lang.Double> cc)
Closeness centrality measures the centrality of the vertices based on distances, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
closenessCentralityUnitLengthAsync(PgxGraph graph)
Closeness centrality measures the centrality of the vertices based on distances, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
closenessCentralityUnitLengthAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> cc)
Closeness centrality measures the centrality of the vertices based on distances, making it possible to find well-connected vertices.
|
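A sketch contrasting the unit-length and weighted closeness variants, assuming an existing Analyst and graph; the "cost" double edge property is a hypothetical name for this example.

```java
import oracle.pgx.api.*;

class ClosenessSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph) throws Exception {
        // Hop-based closeness (every edge counts as length 1).
        VertexProperty<ID, Double> cc = analyst.closenessCentralityUnitLength(graph);

        // Weighted closeness; "cost" is a hypothetical edge property of the example graph.
        EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
        VertexProperty<ID, Double> weightedCc = analyst.closenessCentralityDoubleLength(graph, cost);

        System.out.println(cc.getName() + " / " + weightedCc.getName());
    }
}
```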
<ID> Partition<ID> |
communitiesConductanceMinimization(PgxGraph graph)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> Partition<ID> |
communitiesConductanceMinimization(PgxGraph graph,
int maxIterations)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> Partition<ID> |
communitiesConductanceMinimization(PgxGraph graph,
int maxIterations,
VertexProperty<ID,java.lang.Long> partitionDistribution)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> Partition<ID> |
communitiesConductanceMinimization(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> PgxFuture<Partition<ID>> |
communitiesConductanceMinimizationAsync(PgxGraph graph)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> PgxFuture<Partition<ID>> |
communitiesConductanceMinimizationAsync(PgxGraph graph,
int max)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> PgxFuture<Partition<ID>> |
communitiesConductanceMinimizationAsync(PgxGraph graph,
int maxIterations,
VertexProperty<ID,java.lang.Long> partitionDistribution)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> PgxFuture<Partition<ID>> |
communitiesConductanceMinimizationAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
The Soman and Narang algorithm can find communities in a graph, taking weighted edges into account.
|
<ID> Partition<ID> |
communitiesInfomap(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight)
Infomap can find high quality communities in a graph.
|
<ID> Partition<ID> |
communitiesInfomap(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight,
double tau,
double tol,
int maxIter)
Infomap can find high quality communities in a graph.
|
<ID> Partition<ID> |
communitiesInfomap(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight,
double tau,
double tol,
int maxIter,
VertexProperty<ID,java.lang.Long> module)
Infomap can find high quality communities in a graph.
|
<ID> Partition<ID> |
communitiesInfomap(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Long> module)
Infomap can find high quality communities in a graph.
|
<ID> PgxFuture<Partition<ID>> |
communitiesInfomapAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight)
Infomap can find high quality communities in a graph.
|
<ID> PgxFuture<Partition<ID>> |
communitiesInfomapAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight,
double tau,
double tol,
int maxIter)
Infomap can find high quality communities in a graph.
|
<ID> PgxFuture<Partition<ID>> |
communitiesInfomapAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight,
double tau,
double tol,
int maxIter,
VertexProperty<ID,java.lang.Long> module)
Infomap can find high quality communities in a graph.
|
<ID> PgxFuture<Partition<ID>> |
communitiesInfomapAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Long> module)
Infomap can find high quality communities in a graph.
|
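A sketch of an Infomap call, assuming an existing Analyst and graph; the per-vertex ranks are taken from pagerank (listed further down in this table) and "weight" is a hypothetical double edge property.

```java
import oracle.pgx.api.*;

class InfomapSketch {
    static <ID> Partition<ID> run(Analyst analyst, PgxGraph graph) throws Exception {
        VertexProperty<ID, Double> rank = analyst.pagerank(graph);      // input ranks
        EdgeProperty<Double> weight = graph.getEdgeProperty("weight");  // hypothetical property
        return analyst.communitiesInfomap(graph, rank, weight);         // default tau/tol/maxIter
    }
}
```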
<ID> Partition<ID> |
communitiesLabelPropagation(PgxGraph graph)
Label propagation can find communities in a graph relatively fast
|
<ID> Partition<ID> |
communitiesLabelPropagation(PgxGraph graph,
int maxIterations)
Label propagation can find communities in a graph relatively fast
|
<ID> Partition<ID> |
communitiesLabelPropagation(PgxGraph graph,
int maxIterations,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Label propagation can find communities in a graph relatively fast
|
<ID> Partition<ID> |
communitiesLabelPropagation(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Label propagation can find communities in a graph relatively fast
|
<ID> PgxFuture<Partition<ID>> |
communitiesLabelPropagationAsync(PgxGraph graph)
Label propagation can find communities in a graph relatively fast
|
<ID> PgxFuture<Partition<ID>> |
communitiesLabelPropagationAsync(PgxGraph graph,
int maxIterations)
Label propagation can find communities in a graph relatively fast
|
<ID> PgxFuture<Partition<ID>> |
communitiesLabelPropagationAsync(PgxGraph graph,
int maxIterations,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Label propagation can find communities in a graph relatively fast
|
<ID> PgxFuture<Partition<ID>> |
communitiesLabelPropagationAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Label propagation can find communities in a graph relatively fast
|
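A sketch combining label propagation with conductance (listed a few rows below) to rate one of the resulting components; it assumes an existing Analyst and graph, and that the returned Partition exposes its component count via size().

```java
import oracle.pgx.api.*;

class LabelPropagationSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph) throws Exception {
        Partition<ID> communities = analyst.communitiesLabelPropagation(graph);
        System.out.println("communities found: " + communities.size());

        // Rate the first component; 0 is simply the first partition index.
        Scalar<Double> conductance = analyst.conductance(graph, communities, 0);
        System.out.println("conductance of component 0: " + conductance.get());
    }
}
```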
<ID> Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>> |
computeHighDegreeVertices(PgxGraph graph,
int k)
Computes the k vertices with the highest degrees in the graph.
|
<ID> Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>> |
computeHighDegreeVertices(PgxGraph graph,
int k,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices)
Computes the k vertices with the highest degrees in the graph.
|
<ID> PgxFuture<Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>>> |
computeHighDegreeVerticesAsync(PgxGraph graph,
int k)
Computes the k vertices with the highest degrees in the graph.
|
<ID> PgxFuture<Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>>> |
computeHighDegreeVerticesAsync(PgxGraph graph,
int k,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices)
Computes the k vertices with the highest degrees in the graph.
|
<ID> Scalar<java.lang.Double> |
conductance(PgxGraph graph,
Partition<ID> partition,
long partitionIndex)
Conductance assesses the quality of a partition in a graph
|
<ID> Scalar<java.lang.Double> |
conductance(PgxGraph graph,
Partition<ID> partition,
long partitionIndex,
Scalar<java.lang.Double> conductance)
Conductance assesses the quality of a partition in a graph
|
<ID> PgxFuture<Scalar<java.lang.Double>> |
conductanceAsync(PgxGraph graph,
Partition<ID> partition,
long partitionIndex)
Conductance assesses the quality of a partition in a graph
|
<ID> PgxFuture<Scalar<java.lang.Double>> |
conductanceAsync(PgxGraph graph,
Partition<ID> partition,
long partitionIndex,
Scalar<java.lang.Double> conductance)
Conductance assesses the quality of a partition in a graph
|
long |
countTriangles(PgxGraph graph,
boolean sortVerticesByDegree)
Triangle counting gives an overview of the number of connections between vertices in neighborhoods.
|
PgxFuture<java.lang.Long> |
countTrianglesAsync(PgxGraph graph,
boolean sortVerticesByDegree)
Triangle counting gives an overview of the number of connections between vertices in neighborhoods.
|
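A sketch of a triangle-count call, assuming an existing Analyst and graph; passing true asks the implementation to sort vertices by degree first, which is usually the faster choice on skewed degree distributions.

```java
import oracle.pgx.api.*;

class TriangleCountSketch {
    static long run(Analyst analyst, PgxGraph graph) throws Exception {
        return analyst.countTriangles(graph, /* sortVerticesByDegree = */ true);
    }
}
```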
<ID> VertexProperty<ID,PgxVect<java.lang.Integer>> |
createDistanceIndex(PgxGraph graph,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices)
Computes an index with distances to each high-degree vertex.
|
<ID> VertexProperty<ID,PgxVect<java.lang.Integer>> |
createDistanceIndex(PgxGraph graph,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index)
Computes an index with distances to each high-degree vertex.
|
<ID> PgxFuture<VertexProperty<ID,PgxVect<java.lang.Integer>>> |
createDistanceIndexAsync(PgxGraph graph,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices)
Computes an index with distances to each high-degree vertex.
|
<ID> PgxFuture<VertexProperty<ID,PgxVect<java.lang.Integer>>> |
createDistanceIndexAsync(PgxGraph graph,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index)
Computes an index with distances to each high-degree vertex.
|
oracle.pgx.api.beta.mllib.DeepWalkModelBuilder |
deepWalkModelBuilder()
Builder for a DeepWalk model.
|
<ID> VertexProperty<ID,java.lang.Integer> |
degreeCentrality(PgxGraph graph)
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood.
|
<ID> VertexProperty<ID,java.lang.Integer> |
degreeCentrality(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> dc)
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
degreeCentralityAsync(PgxGraph graph)
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
degreeCentralityAsync(PgxGraph graph,
java.lang.String propertyName) |
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
degreeCentralityAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> dc)
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood.
|
<ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> |
diameter(PgxGraph graph)
Diameter/radius gives an overview of the distances in a graph
|
<ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> |
diameter(PgxGraph graph,
Scalar<java.lang.Integer> diameter,
VertexProperty<ID,java.lang.Integer> eccentricity)
Diameter/radius gives an overview of the distances in a graph
|
<ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> |
diameterAsync(PgxGraph graph)
Diameter/radius gives an overview of the distances in a graph
|
<ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> |
diameterAsync(PgxGraph graph,
Scalar<java.lang.Integer> diameter,
VertexProperty<ID,java.lang.Integer> eccentricity)
Diameter/radius gives an overview of the distances in a graph
|
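A sketch showing how the diameter result pair could be unpacked, assuming an existing Analyst and graph.

```java
import oracle.pgx.api.*;

class DiameterSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph) throws Exception {
        Pair<Scalar<Integer>, VertexProperty<ID, Integer>> result = analyst.diameter(graph);
        System.out.println("diameter: " + result.getFirst().get());
        VertexProperty<ID, Integer> eccentricity = result.getSecond(); // per-vertex eccentricities
    }
}
```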
<ID> VertexProperty<ID,java.lang.Double> |
eigenvectorCentrality(PgxGraph graph)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> VertexProperty<ID,java.lang.Double> |
eigenvectorCentrality(PgxGraph graph,
int max,
double maxDiff,
boolean useL2Norm,
boolean useInEdge)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> VertexProperty<ID,java.lang.Double> |
eigenvectorCentrality(PgxGraph graph,
int max,
double maxDiff,
boolean useL2Norm,
boolean useInEdge,
VertexProperty<ID,java.lang.Double> ec)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> VertexProperty<ID,java.lang.Double> |
eigenvectorCentrality(PgxGraph graph,
VertexProperty<ID,java.lang.Double> ec)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
eigenvectorCentralityAsync(PgxGraph graph)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
eigenvectorCentralityAsync(PgxGraph graph,
int max,
double maxDiff,
boolean useL2Norm,
boolean useInEdge)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
eigenvectorCentralityAsync(PgxGraph graph,
int max,
double maxDiff,
boolean useL2Norm,
boolean useInEdge,
VertexProperty<ID,java.lang.Double> ec)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
eigenvectorCentralityAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> ec)
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, making it possible to find well-connected vertices.
|
<ID> org.apache.commons.lang3.tuple.Triple<ScalarSequence<java.lang.Integer>,VertexSequence<ID>,EdgeSequence> |
enumerateSimplePaths(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int k,
VertexSet verticesOnPath,
EdgeSet edgesOnPath,
PgxMap<PgxVertex<ID>,java.lang.Integer> dist)
Enumerates all simple paths between the source and destination vertices.
|
<ID> PgxFuture<org.apache.commons.lang3.tuple.Triple<ScalarSequence<java.lang.Integer>,VertexSequence<ID>,EdgeSequence>> |
enumerateSimplePathsAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int k,
VertexSet verticesOnPath,
EdgeSet edgesOnPath,
PgxMap<PgxVertex<ID>,java.lang.Integer> dist)
Enumerates all simple paths between the source and destination vertices.
|
<ID> AllPaths<ID> |
fattestPath(PgxGraph graph,
ID rootId,
EdgeProperty<java.lang.Double> capacity)
Convenience wrapper around
fattestPath(PgxGraph, PgxVertex, EdgeProperty) taking a vertex ID instead of a
PgxVertex . |
<ID> AllPaths<ID> |
fattestPath(PgxGraph graph,
ID rootId,
EdgeProperty<java.lang.Double> capacity,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
fattestPath(PgxGraph, PgxVertex, EdgeProperty, VertexProperty,
VertexProperty, VertexProperty) taking a vertex ID instead of a
PgxVertex . |
<ID> AllPaths<ID> |
fattestPath(PgxGraph graph,
PgxVertex<ID> root,
EdgeProperty<java.lang.Double> capacity)
Fattest path is a fast algorithm for finding a shortest path while adding constraints for flow-related matters.
|
<ID> AllPaths<ID> |
fattestPath(PgxGraph graph,
PgxVertex<ID> root,
EdgeProperty<java.lang.Double> capacity,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Fattest path is a fast algorithm for finding a shortest path while adding constraints for flow-related matters.
|
<ID> PgxFuture<AllPaths<ID>> |
fattestPathAsync(PgxGraph graph,
PgxVertex<ID> root,
EdgeProperty<java.lang.Double> capacity)
Fattest path is a fast algorithm for finding a shortest path while adding constraints for flow-related matters.
|
<ID> PgxFuture<AllPaths<ID>> |
fattestPathAsync(PgxGraph graph,
PgxVertex<ID> root,
EdgeProperty<java.lang.Double> capacity,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Fattest path is a fast algorithm for finding a shortest path while adding constraints for flow-related matters.
|
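A sketch of a fattest-path query, assuming an existing Analyst and graph, a hypothetical "capacity" double edge property, and the AllPaths.getPath accessor for extracting the path to one destination.

```java
import oracle.pgx.api.*;

class FattestPathSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph, ID rootId, ID destId) throws Exception {
        EdgeProperty<Double> capacity = graph.getEdgeProperty("capacity"); // hypothetical property
        AllPaths<ID> paths = analyst.fattestPath(graph, rootId, capacity); // ID-based convenience wrapper

        PgxPath<ID> path = paths.getPath(graph.getVertex(destId));
        System.out.println("path to destination exists: " + path.exists());
    }
}
```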
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root,
int maxDepth)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex, int)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf,
int maxDepth)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf,
int maxDepth,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int, VertexProperty,
VertexProperty)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, VertexProperty,
VertexProperty)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
int maxDepth)
Convenience wrapper around
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, int)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root,
int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredBfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root,
int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredBfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
|
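A sketch of a filtered BFS with a navigator filter, assuming an existing Analyst and graph; the "age" property, the filter expression, and the use of VertexFilter.fromExpression are illustrative assumptions rather than requirements of this API.

```java
import oracle.pgx.api.*;
import oracle.pgx.api.filter.VertexFilter;

class FilteredBfsSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph, ID rootId) throws Exception {
        // Only vertices matching the navigator expression are expanded during the traversal.
        VertexFilter navigator = VertexFilter.fromExpression("vertex.age > 30");

        Pair<VertexProperty<ID, Integer>, VertexProperty<ID, PgxVertex<ID>>> result =
                analyst.filteredBfs(graph, graph.getVertex(rootId), navigator);

        VertexProperty<ID, Integer> distance = result.getFirst();      // BFS levels
        VertexProperty<ID, PgxVertex<ID>> parent = result.getSecond(); // BFS parents
    }
}
```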
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root,
int maxDepth)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex, int)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf,
int maxDepth)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf,
int maxDepth,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int, VertexProperty,
VertexProperty)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
boolean initWithInf,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, VertexProperty,
VertexProperty)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
ID root,
VertexFilter filter,
VertexFilter navigator,
int maxDepth)
Convenience wrapper around
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, int)
taking a vertex ID instead of PgxVertex . |
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root,
int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> |
filteredDfs(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root,
int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
int maxDepth,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
boolean initWithInf,
VertexProperty<ID,java.lang.Integer> distance,
VertexProperty<ID,PgxVertex<ID>> parent)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> |
filteredDfsAsync(PgxGraph graph,
PgxVertex<ID> root,
VertexFilter navigator,
int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
|
<ID> PgxPath<ID> |
findCycle(PgxGraph graph)
Find cycle looks for any loop in the graph.
|
<ID> PgxPath<ID> |
findCycle(PgxGraph graph,
PgxVertex<ID> src)
Find cycle looks for any loop in the graph.
|
<ID> PgxPath<ID> |
findCycle(PgxGraph graph,
PgxVertex<ID> src,
VertexSequence<ID> nodeSeq,
EdgeSequence edgeSeq)
Find cycle looks for any loop in the graph.
|
<ID> PgxPath<ID> |
findCycle(PgxGraph graph,
VertexSequence<ID> nodeSeq,
EdgeSequence edgeSeq)
Find cycle looks for any loop in the graph.
|
<ID> PgxFuture<PgxPath<ID>> |
findCycleAsync(PgxGraph graph)
Find cycle looks for any loop in the graph.
|
<ID> PgxFuture<PgxPath<ID>> |
findCycleAsync(PgxGraph graph,
PgxVertex<ID> src)
Find cycle looks for any loop in the graph.
|
<ID> PgxFuture<PgxPath<ID>> |
findCycleAsync(PgxGraph graph,
PgxVertex<ID> src,
VertexSequence<ID> nodeSeq,
EdgeSequence edgeSeq)
Find cycle looks for any loop in the graph.
|
<ID> PgxFuture<PgxPath<ID>> |
findCycleAsync(PgxGraph graph,
VertexSequence<ID> nodeSeq,
EdgeSequence edgeSeq)
Find cycle looks for any loop in the graph.
|
PgxSession |
getSession()
Gets the session.
|
oracle.pgx.api.beta.mllib.GraphWiseConvLayerConfigBuilder |
graphWiseConvLayerConfigBuilder()
Returns a GraphWiseConvLayerConfigBuilder used to create a GraphWiseConvLayerConfig.
|
oracle.pgx.api.beta.mllib.GraphWisePredictionLayerConfigBuilder |
graphWisePredictionLayerConfigBuilder()
Returns a GraphWisePredictionLayerConfigBuilder used to create a GraphWisePredictionLayerConfig.
|
<ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> |
hits(PgxGraph graph)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
<ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> |
hits(PgxGraph graph,
int max)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
<ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> |
hits(PgxGraph graph,
int max,
VertexProperty<ID,java.lang.Double> auth,
VertexProperty<ID,java.lang.Double> hubs)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
<ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> |
hits(PgxGraph graph,
VertexProperty<ID,java.lang.Double> auth,
VertexProperty<ID,java.lang.Double> hubs)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> |
hitsAsync(PgxGraph graph)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> |
hitsAsync(PgxGraph graph,
int max)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> |
hitsAsync(PgxGraph graph,
int max,
VertexProperty<ID,java.lang.Double> auth,
VertexProperty<ID,java.lang.Double> hubs)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
<ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> |
hitsAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> auth,
VertexProperty<ID,java.lang.Double> hubs)
HITS assigns ranking scores to the vertices, aiming to assess the quality of information and references in linked structures.
|
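A sketch of a HITS call, assuming an existing Analyst and graph; 100 iterations is an arbitrary sample value.

```java
import oracle.pgx.api.*;

class HitsSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph) throws Exception {
        Pair<VertexProperty<ID, Double>, VertexProperty<ID, Double>> scores = analyst.hits(graph, 100);
        VertexProperty<ID, Double> authorities = scores.getFirst();
        VertexProperty<ID, Double> hubs = scores.getSecond();
        System.out.println(authorities.getName() + " / " + hubs.getName());
    }
}
```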
<ID> VertexProperty<ID,java.lang.Integer> |
inDegreeCentrality(PgxGraph graph)
In-degree centrality measures the centrality of the vertices based on their in-degree, letting you see how a vertex influences its neighborhood.
|
<ID> VertexProperty<ID,java.lang.Integer> |
inDegreeCentrality(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> dc)
In-degree centrality measures the centrality of the vertices based on their in-degree, letting you see how a vertex influences its neighborhood.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
inDegreeCentralityAsync(PgxGraph graph)
In-degree centrality measures the centrality of the vertices based on their in-degree, letting you see how a vertex influences its neighborhood.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
inDegreeCentralityAsync(PgxGraph graph,
java.lang.String propertyName) |
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
inDegreeCentralityAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> dc)
In-degree centrality measures the centrality of the vertices based on their in-degree, letting you see how a vertex influences its neighborhood.
|
PgxMap<java.lang.Integer,java.lang.Long> |
inDegreeDistribution(PgxGraph graph)
In-degree distribution gives information about the incoming flows in a graph
|
PgxMap<java.lang.Integer,java.lang.Long> |
inDegreeDistribution(PgxGraph graph,
PgxMap<java.lang.Integer,java.lang.Long> distribution)
In-degree distribution gives information about the incoming flows in a graph
|
PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> |
inDegreeDistributionAsync(PgxGraph graph)
In-degree distribution gives information about the incoming flows in a graph
|
PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> |
inDegreeDistributionAsync(PgxGraph graph,
PgxMap<java.lang.Integer,java.lang.Long> distribution)
In-degree distribution gives information about the incoming flows in a graph
|
<ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> |
kcore(PgxGraph graph)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
<ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> |
kcore(PgxGraph graph,
int minCore,
int maxCore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
<ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> |
kcore(PgxGraph graph,
int minCore,
int maxCore,
Scalar<java.lang.Long> maxKCore,
VertexProperty<ID,java.lang.Long> kcore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
<ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> |
kcore(PgxGraph graph,
Scalar<java.lang.Long> maxKCore,
VertexProperty<ID,java.lang.Long> kcore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
<ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> |
kcoreAsync(PgxGraph graph)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
<ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> |
kcoreAsync(PgxGraph graph,
int minCore,
int maxCore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
<ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> |
kcoreAsync(PgxGraph graph,
int minCore,
int maxCore,
Scalar<java.lang.Long> maxKCore,
VertexProperty<ID,java.lang.Long> kcore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
<ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> |
kcoreAsync(PgxGraph graph,
Scalar<java.lang.Long> maxKCore,
VertexProperty<ID,java.lang.Long> kcore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
|
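A sketch of a k-core decomposition, assuming an existing Analyst and graph.

```java
import oracle.pgx.api.*;

class KCoreSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph) throws Exception {
        Pair<Scalar<Long>, VertexProperty<ID, Long>> result = analyst.kcore(graph);
        System.out.println("largest k-core: " + result.getFirst().get());
        VertexProperty<ID, Long> coreNumber = result.getSecond(); // k-core value per vertex
    }
}
```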
<ID> Pair<VertexSequence<ID>,EdgeSequence> |
limitedShortestPathHopDist(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index)
Computes the k-hop limited shortest path between two vertices.
|
<ID> Pair<VertexSequence<ID>,EdgeSequence> |
limitedShortestPathHopDist(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index,
VertexSequence<ID> pathVertices,
EdgeSequence pathEdges)
Computes the k-hop limited shortest path between two vertices.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> |
limitedShortestPathHopDistAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index)
Computes the k-hop limited shortest path between two vertices.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> |
limitedShortestPathHopDistAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index,
VertexSequence<ID> pathVertices,
EdgeSequence pathEdges)
Computes the k-hop limited shortest path between two vertices.
|
<ID> Pair<VertexSequence<ID>,EdgeSequence> |
limitedShortestPathHopDistFiltered(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index,
EdgeFilter filter)
Computes the k-hop limited shortest path between two vertices.
|
<ID> Pair<VertexSequence<ID>,EdgeSequence> |
limitedShortestPathHopDistFiltered(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index,
EdgeFilter filter,
VertexSequence<ID> pathVertices,
EdgeSequence pathEdges)
Computes the k-hop limited shortest path between two vertices.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> |
limitedShortestPathHopDistFilteredAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index,
EdgeFilter filter)
Computes the k-hop limited shortest path between two vertices.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> |
limitedShortestPathHopDistFilteredAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
int maxHops,
PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping,
VertexSet<ID> highDegreeVertices,
VertexProperty<ID,PgxVect<java.lang.Integer>> index,
EdgeFilter filter,
VertexSequence<ID> pathVertices,
EdgeSequence pathEdges)
Computes the k-hop limited shortest path between two vertices.
|
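The preceding families (computeHighDegreeVertices, createDistanceIndex, limitedShortestPathHopDist) are typically used together; the following sketch wires them up, assuming an existing Analyst and graph and sample values for k and the hop limit.

```java
import oracle.pgx.api.*;

class HopLimitedShortestPathSketch {
    static <ID> void run(Analyst analyst, PgxGraph graph, ID srcId, ID dstId) throws Exception {
        // 1. Pick the 10 highest-degree vertices (10 is a sample value).
        Pair<PgxMap<Integer, PgxVertex<ID>>, VertexSet<ID>> hd = analyst.computeHighDegreeVertices(graph, 10);

        // 2. Index the distances from every vertex to those high-degree vertices.
        VertexProperty<ID, PgxVect<Integer>> index =
                analyst.createDistanceIndex(graph, hd.getFirst(), hd.getSecond());

        // 3. Query a shortest path limited to at most 4 hops (sample value).
        Pair<VertexSequence<ID>, EdgeSequence> path = analyst.limitedShortestPathHopDist(
                graph, graph.getVertex(srcId), graph.getVertex(dstId), 4,
                hd.getFirst(), hd.getSecond(), index);

        System.out.println("vertices on path: " + path.getFirst().size());
    }
}
```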
oracle.pgx.api.beta.mllib.DeepWalkModel |
loadDeepWalkModel(java.lang.String path,
java.lang.String key)
Loads an encrypted DeepWalk model.
|
oracle.pgx.api.beta.mllib.Pg2vecModel |
loadPg2vecModel(java.lang.String path,
java.lang.String key)
Loads an encrypted pg2vec model
|
oracle.pgx.api.beta.mllib.SupervisedGraphWiseModel |
loadSupervisedGraphWiseModel(java.lang.String path,
java.lang.String key)
Loads an encrypted GraphWise model
|
<ID> VertexProperty<ID,java.lang.Double> |
localClusteringCoefficient(PgxGraph graph)
LCC gives information about potential clustering options in a graph
|
<ID> VertexProperty<ID,java.lang.Double> |
localClusteringCoefficient(PgxGraph graph,
VertexProperty<ID,java.lang.Double> lcc)
LCC gives information about potential clustering options in a graph
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
localClusteringCoefficientAsync(PgxGraph graph)
LCC gives information about potential clustering options in a graph
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
localClusteringCoefficientAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> lcc)
LCC gives information about potential clustering options in a graph
|
<ID> VertexProperty<ID,java.lang.Long> |
louvain(PgxGraph graph,
EdgeProperty<java.lang.Double> weight)
Louvain can detect communities in a large graph relatively fast.
|
<ID> VertexProperty<ID,java.lang.Long> |
louvain(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
int maxIter)
Louvain can detect communities in a large graph relatively fast.
|
<ID> VertexProperty<ID,java.lang.Long> |
louvain(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
int maxIter,
int nbrPass,
double tol,
VertexProperty<ID,java.lang.Long> community)
Louvain can detect communities in a large graph relatively fast.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Long>> |
louvainAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> weight)
Louvain can detect communities in a large graph relatively fast.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Long>> |
louvainAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
int maxIter)
Louvain can detect communities in a large graph relatively fast.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Long>> |
louvainAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
int maxIter,
int nbrPass,
double tol,
VertexProperty<ID,java.lang.Long> community)
Louvain can detect communities in a large graph relatively fast.
|
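A sketch of a Louvain call, assuming an existing Analyst and graph and a hypothetical "weight" double edge property; maxIter = 100 is a sample value.

```java
import oracle.pgx.api.*;

class LouvainSketch {
    static <ID> VertexProperty<ID, Long> run(Analyst analyst, PgxGraph graph) throws Exception {
        EdgeProperty<Double> weight = graph.getEdgeProperty("weight"); // hypothetical property
        return analyst.louvain(graph, weight, 100);                    // community id per vertex
    }
}
```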
<ID> MatrixFactorizationModel<ID> |
matrixFactorizationGradientDescent(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> MatrixFactorizationModel<ID> |
matrixFactorizationGradientDescent(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight,
double learningRate,
double changePerStep,
double lambda,
int maxStep,
int vectorLength)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> MatrixFactorizationModel<ID> |
matrixFactorizationGradientDescent(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight,
double learningRate,
double changePerStep,
double lambda,
int maxStep,
int vectorLength,
VertexProperty<ID,PgxVect<java.lang.Double>> features)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> MatrixFactorizationModel<ID> |
matrixFactorizationGradientDescent(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,PgxVect<java.lang.Double>> features)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> PgxFuture<MatrixFactorizationModel<ID>> |
matrixFactorizationGradientDescentAsync(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> PgxFuture<MatrixFactorizationModel<ID>> |
matrixFactorizationGradientDescentAsync(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight,
double learningRate,
double changePerStep,
double lambda,
int maxStep,
int vectorLength)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> PgxFuture<MatrixFactorizationModel<ID>> |
matrixFactorizationGradientDescentAsync(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight,
double learningRate,
double changePerStep,
double lambda,
int maxStep,
int vectorLength,
VertexProperty<ID,PgxVect<java.lang.Double>> features)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> PgxFuture<MatrixFactorizationModel<ID>> |
matrixFactorizationGradientDescentAsync(BipartiteGraph graph,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,PgxVect<java.lang.Double>> features)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
|
<ID> VertexProperty<ID,java.lang.Double> |
matrixFactorizationRecommendations(BipartiteGraph graph,
ID user,
int vectorLength,
VertexProperty<ID,PgxVect<java.lang.Double>> feature,
VertexProperty<ID,java.lang.Double> estimatedRating)
Convenience wrapper around
matrixFactorizationRecommendations(BipartiteGraph, PgxVertex, int,
VertexProperty, VertexProperty) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
matrixFactorizationRecommendations(BipartiteGraph graph,
PgxVertex<ID> user,
int vectorLength,
VertexProperty<ID,PgxVect<java.lang.Double>> feature,
VertexProperty<ID,java.lang.Double> estimatedRating)
Estimating ratings can be used as a prediction algorithm for bipartite graphs.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
matrixFactorizationRecommendationsAsync(BipartiteGraph graph,
PgxVertex<ID> user,
int vectorLength,
VertexProperty<ID,PgxVect<java.lang.Double>> feature,
VertexProperty<ID,java.lang.Double> estimatedRating)
Estimating ratings can be used as a prediction algorithm for bipartite graphs.
|
<ID> VertexProperty<ID,java.lang.Integer> |
outDegreeCentrality(PgxGraph graph)
Out-degree centrality measures the centrality of the vertices based on their out-degree, letting you see how a vertex influences its neighborhood.
|
<ID> VertexProperty<ID,java.lang.Integer> |
outDegreeCentrality(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> dc)
Out-degree centrality measures the centrality of the vertices based on their out-degree, letting you see how a vertex influences its neighborhood.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
outDegreeCentralityAsync(PgxGraph graph)
Out-degree centrality measures the centrality of the vertices based on their out-degree, letting you see how a vertex influences its neighborhood.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
outDegreeCentralityAsync(PgxGraph graph,
java.lang.String propertyName) |
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
outDegreeCentralityAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> dc)
Out-degree centrality measures the centrality of the vertices based on their out-degree, letting you see how a vertex influences its neighborhood.
|
PgxMap<java.lang.Integer,java.lang.Long> |
outDegreeDistribution(PgxGraph graph)
Out-degree distribution gives information about the outgoing flows in a graph
|
PgxMap<java.lang.Integer,java.lang.Long> |
outDegreeDistribution(PgxGraph graph,
PgxMap<java.lang.Integer,java.lang.Long> distribution)
Out-degree distribution gives information about the outgoing flows in a graph
|
PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> |
outDegreeDistributionAsync(PgxGraph graph)
Out-degree distribution gives information about the outgoing flows in a graph
|
PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> |
outDegreeDistributionAsync(PgxGraph graph,
PgxMap<java.lang.Integer,java.lang.Long> distribution)
Out-degree distribution gives information about the outgoing flows in a graph
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph,
boolean norm)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph,
double e,
double d,
int max)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph,
double e,
double d,
int max,
boolean norm)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph,
double e,
double d,
int max,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerank(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerankApproximate(PgxGraph graph)
Faster, but less accurate than pagerank.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerankApproximate(PgxGraph graph,
double e,
double d,
int max)
Faster, but less accurate than pagerank.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerankApproximate(PgxGraph graph,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
Faster, but less accurate than pagerank.
|
<ID> VertexProperty<ID,java.lang.Double> |
pagerankApproximate(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank)
Faster, but less accurate than pagerank.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankApproximateAsync(PgxGraph graph)
Faster, but less accurate than pagerank.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankApproximateAsync(PgxGraph graph,
double e,
double d,
int max)
Faster, but less accurate than pagerank.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankApproximateAsync(PgxGraph graph,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
Faster, but less accurate than pagerank.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankApproximateAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank)
Faster, but less accurate than pagerank.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph,
boolean norm)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph,
double e,
double d,
int max)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph,
double e,
double d,
int max,
boolean norm)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph,
double e,
double d,
int max,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
pagerankAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph.
|
<ID> Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>> |
partitionConductance(PgxGraph graph,
Partition<ID> partition)
Partition conductance assesses the quality of many partitions in a graph
|
<ID> Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>> |
partitionConductance(PgxGraph graph,
Partition<ID> partition,
Scalar<java.lang.Double> avgConductance,
Scalar<java.lang.Double> minConductance)
Partition conductance assesses the quality of many partitions in a graph
|
<ID> PgxFuture<Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>>> |
partitionConductanceAsync(PgxGraph graph,
Partition<ID> partition)
Partition conductance assesses the quality of many partitions in a graph
|
<ID> PgxFuture<Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>>> |
partitionConductanceAsync(PgxGraph graph,
Partition<ID> partition,
Scalar<java.lang.Double> avgConductance,
Scalar<java.lang.Double> minConductance)
Partition conductance assesses the quality of many partitions in a graph
|
<ID> Scalar<java.lang.Double> |
partitionModularity(PgxGraph graph,
Partition<ID> partition)
Modularity summarizes information about the quality of components in a graph
|
<ID> Scalar<java.lang.Double> |
partitionModularity(PgxGraph graph,
Partition<ID> partition,
Scalar<java.lang.Double> modularity)
Modularity summarizes information about the quality of components in a graph
|
<ID> PgxFuture<Scalar<java.lang.Double>> |
partitionModularityAsync(PgxGraph graph,
Partition<ID> partition)
Modularity summarizes information about the quality of components in a graph
|
<ID> PgxFuture<Scalar<java.lang.Double>> |
partitionModularityAsync(PgxGraph graph,
Partition<ID> partition,
Scalar<java.lang.Double> modularity)
Modularity summarizes information about the quality of components in a graph
|
<ID> PgxFuture<Scalar<java.lang.Double>> |
partitionModularityAsync(PgxGraph graph,
Partition<ID> partition,
java.lang.String modularityName) |
<ID> VertexSet<ID> |
periphery(PgxGraph graph)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> VertexSet<ID> |
periphery(PgxGraph graph,
VertexSet<ID> periphery)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> PgxFuture<VertexSet<ID>> |
peripheryAsync(PgxGraph graph)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> PgxFuture<VertexSet<ID>> |
peripheryAsync(PgxGraph graph,
VertexSet<ID> periphery)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max)
Convenience wrapper around
personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int)
taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max,
boolean norm)
Convenience wrapper around
personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int,
boolean) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Convenience wrapper around
personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, boolean,
VertexProperty) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max,
VertexProperty<ID,java.lang.Double> rank)
Convenience wrapper around
personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int,
VertexProperty) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v,
boolean norm)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
PgxVertex<ID> v,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
boolean norm)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
ID v,
java.math.BigDecimal d,
int maxIterations,
java.math.BigDecimal maxDiff)
Convenience wrapper around
personalizedSalsa(BipartiteGraph, PgxVertex, BigDecimal, int, BigDecimal)
taking a vertex ID instead of PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
ID v,
java.math.BigDecimal d,
int maxIterations,
java.math.BigDecimal maxDiff,
VertexProperty<ID,java.lang.Double> salsaRank)
Convenience wrapper around
personalizedSalsa(BipartiteGraph, PgxVertex, BigDecimal, int, BigDecimal, VertexProperty)
taking a vertex ID instead of PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
PgxVertex<ID> v)
Personalized salsa for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
PgxVertex<ID> v,
double d,
int maxIter,
double maxDiff)
Personalized salsa for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
PgxVertex<ID> v,
double d,
int maxIter,
double maxDiff,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
PgxVertex<ID> v,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a vertex of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
VertexSet<ID> vertices)
Personalized salsa for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
VertexSet<ID> vertices,
double d,
int maxIter,
double maxDiff)
Personalized salsa for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
VertexSet<ID> vertices,
double d,
int maxIter,
double maxDiff,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedSalsa(BipartiteGraph graph,
VertexSet<ID> vertices,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
PgxVertex<ID> v)
Personalized salsa for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
PgxVertex<ID> v,
double d,
int maxIter,
double maxDiff)
Personalized salsa for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
PgxVertex<ID> v,
double d,
int maxIter,
double maxDiff,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
PgxVertex<ID> v,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a vertex of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
VertexSet<ID> vertices)
Personalized salsa for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
VertexSet<ID> vertices,
double d,
int maxIter,
double maxDiff)
Personalized salsa for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
VertexSet<ID> vertices,
double d,
int maxIter,
double maxDiff,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a set of vertices of interest.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedSalsaAsync(BipartiteGraph graph,
VertexSet<ID> vertices,
VertexProperty<ID,java.lang.Double> salsaRank)
Personalized salsa for a set of vertices of interest.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Convenience wrapper around
personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int,
boolean, EdgeProperty) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Convenience wrapper around
personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, boolean, EdgeProperty,
VertexProperty) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max,
EdgeProperty<java.lang.Double> weight)
Convenience wrapper around
personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int,
EdgeProperty) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
ID vertexId,
java.math.BigDecimal e,
java.math.BigDecimal d,
int max,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Convenience wrapper around
personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, EdgeProperty,
VertexProperty) taking a vertex ID instead of a PgxVertex . |
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
PgxVertex<ID> v,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
personalizedWeightedPagerank(PgxGraph graph,
VertexSet<ID> vertices,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
EdgeProperty<java.lang.Double> weight)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
PgxVertex<ID> v,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized weighted pagerank for a vertex and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
EdgeProperty<java.lang.Double> weight)
Personalized pagerank for a set of vertices and weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
personalizedWeightedPagerankAsync(PgxGraph graph,
VertexSet<ID> vertices,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
Personalized pagerank for a set of vertices and weighted edges.
|
oracle.pgx.api.beta.mllib.Pg2vecModelBuilder |
pg2vecModelBuilder()
Builds a pg2vec model
|
EdgeProperty<java.lang.Boolean> |
prim(PgxGraph graph,
EdgeProperty<java.lang.Double> weight)
Prim's algorithm reveals a minimum spanning tree structure in a weighted graph
|
EdgeProperty<java.lang.Boolean> |
prim(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
EdgeProperty<java.lang.Boolean> mst)
Prim's algorithm reveals a minimum spanning tree structure in a weighted graph
|
EdgeProperty<java.lang.Boolean> |
prim(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
java.lang.String mstName)
Convenience wrapper around
prim(PgxGraph, EdgeProperty, EdgeProperty) |
PgxFuture<EdgeProperty<java.lang.Boolean>> |
primAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> weight)
Prim's algorithm reveals a minimum spanning tree structure in a weighted graph
|
PgxFuture<EdgeProperty<java.lang.Boolean>> |
primAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
EdgeProperty<java.lang.Boolean> mst)
Prim's algorithm reveals a minimum spanning tree structure in a weighted graph
|
<ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> |
radius(PgxGraph graph)
Diameter/radius gives an overview of the distances in a graph
|
<ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> |
radius(PgxGraph graph,
Scalar<java.lang.Integer> radius,
VertexProperty<ID,java.lang.Integer> eccentricity)
Diameter/radius gives an overview of the distances in a graph
|
<ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> |
radiusAsync(PgxGraph graph)
Diameter/radius gives an overview of the distances in a graph
|
<ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> |
radiusAsync(PgxGraph graph,
Scalar<java.lang.Integer> radius,
VertexProperty<ID,java.lang.Integer> eccentricity)
Diameter/radius gives an overview of the distances in a graph
|
<ID> PgxMap<PgxVertex<ID>,java.lang.Integer> |
randomWalkWithRestart(PgxGraph graph,
ID source,
int length,
java.math.BigDecimal resetProb,
PgxMap<PgxVertex<ID>,java.lang.Integer> visitCount)
Convenience wrapper around
randomWalkWithRestart(PgxGraph, PgxVertex, int, double,
PgxMap) taking a vertex ID instead of a PgxVertex . |
<ID> PgxMap<PgxVertex<ID>,java.lang.Integer> |
randomWalkWithRestart(PgxGraph graph,
PgxVertex<ID> source,
int length,
double resetProb,
PgxMap<PgxVertex<ID>,java.lang.Integer> visitCount)
Random walk with restart does what its name says; it can find approximate stationary distributions
|
<ID> PgxFuture<PgxMap<PgxVertex<ID>,java.lang.Integer>> |
randomWalkWithRestartAsync(PgxGraph graph,
PgxVertex<ID> source,
int length,
double resetProb,
PgxMap<PgxVertex<ID>,java.lang.Integer> visitCount)
Random walk with restart does what its name says; it can find approximate stationary distributions
|
<ID> java.lang.Integer |
reachability(PgxGraph graph,
PgxVertex<ID> source,
PgxVertex<ID> dest,
int maxHops,
boolean ignoreEdgeDirection)
Reachability is a fast way to check if two vertices are reachable from each other.
|
<ID> PgxFuture<java.lang.Integer> |
reachabilityAsync(PgxGraph graph,
PgxVertex<ID> source,
PgxVertex<ID> dest,
int maxHops,
boolean ignoreEdgeDirection)
Reachability is a fast way to check if two vertices are reachable from each other.
|
<ID> VertexProperty<ID,java.lang.Double> |
salsa(BipartiteGraph graph)
SALSA computes ranking scores.
|
<ID> VertexProperty<ID,java.lang.Double> |
salsa(BipartiteGraph graph,
double maxDiff,
int maxIter)
SALSA computes ranking scores.
|
<ID> VertexProperty<ID,java.lang.Double> |
salsa(BipartiteGraph graph,
double maxDiff,
int maxIter,
VertexProperty<ID,java.lang.Double> salsaRank)
SALSA computes ranking scores.
|
<ID> VertexProperty<ID,java.lang.Double> |
salsa(BipartiteGraph graph,
VertexProperty<ID,java.lang.Double> salsaRank)
SALSA computes ranking scores.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
salsaAsync(BipartiteGraph graph)
SALSA computes ranking scores.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
salsaAsync(BipartiteGraph graph,
double maxDiff,
int maxIter)
SALSA computes ranking scores.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
salsaAsync(BipartiteGraph graph,
double maxDiff,
int maxIter,
VertexProperty<ID,java.lang.Double> salsaRank)
SALSA computes ranking scores.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
salsaAsync(BipartiteGraph graph,
VertexProperty<ID,java.lang.Double> salsaRank)
SALSA computes ranking scores.
|
<ID> Partition<ID> |
sccKosaraju(PgxGraph graph)
Kosaraju finds strongly connected components in a graph
|
<ID> Partition<ID> |
sccKosaraju(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Kosaraju finds strongly connected components in a graph
|
<ID> PgxFuture<Partition<ID>> |
sccKosarajuAsync(PgxGraph graph)
Kosaraju finds strongly connected components in a graph
|
<ID> PgxFuture<Partition<ID>> |
sccKosarajuAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Kosaraju finds strongly connected components in a graph
|
<ID> Partition<ID> |
sccTarjan(PgxGraph graph)
Tarjan finds strongly connected components in a graph
|
<ID> Partition<ID> |
sccTarjan(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Tarjan finds strongly connected components in a graph
|
<ID> PgxFuture<Partition<ID>> |
sccTarjanAsync(PgxGraph graph)
Tarjan finds strongly connected components in a graph
|
<ID> PgxFuture<Partition<ID>> |
sccTarjanAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitonDistribution)
Tarjan finds strongly connected components in a graph
|
<ID> AllPaths<ID> |
shortestPathBellmanFord(PgxGraph graph,
ID srcId,
EdgeProperty<java.lang.Double> cost)
Convenience wrapper around
shortestPathBellmanFord(PgxGraph, PgxVertex, EdgeProperty) taking a vertex ID
instead of PgxVertex . |
<ID> AllPaths<ID> |
shortestPathBellmanFord(PgxGraph graph,
ID srcId,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathBellmanFord(PgxGraph, PgxVertex, EdgeProperty, VertexProperty,
VertexProperty, VertexProperty) taking a vertex ID
instead of PgxVertex . |
<ID> AllPaths<ID> |
shortestPathBellmanFord(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost)
Bellman-ford finds multiple shortest paths at the same time
|
<ID> AllPaths<ID> |
shortestPathBellmanFord(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Bellman-ford finds multiple shortest paths at the same time
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathBellmanFordAsync(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost)
Bellman-ford finds multiple shortest paths at the same time
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathBellmanFordAsync(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Bellman-ford finds multiple shortest paths at the same time
|
<ID> AllPaths<ID> |
shortestPathBellmanFordReverse(PgxGraph graph,
ID srcId,
EdgeProperty<java.lang.Double> cost)
Convenience wrapper around
shortestPathBellmanFordReverse(PgxGraph, PgxVertex, EdgeProperty) taking a
vertex ID instead of PgxVertex . |
<ID> AllPaths<ID> |
shortestPathBellmanFordReverse(PgxGraph graph,
ID srcId,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathBellmanFordReverse(PgxGraph, PgxVertex, EdgeProperty,
VertexProperty, VertexProperty, VertexProperty) taking a
vertex ID instead of PgxVertex . |
<ID> AllPaths<ID> |
shortestPathBellmanFordReverse(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost)
Reversed bellman-ford finds multiple shortest paths at the same time
|
<ID> AllPaths<ID> |
shortestPathBellmanFordReverse(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Reversed bellman-ford finds multiple shortest paths at the same time
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathBellmanFordReverseAsync(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost)
Reversed bellman-ford finds multiple shortest paths at the same time
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathBellmanFordReverseAsync(PgxGraph graph,
PgxVertex<ID> src,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Reversed bellman-ford finds multiple shortest paths at the same time
|
<ID> PgxPath<ID> |
shortestPathDijkstra(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost)
Convenience wrapper around
shortestPathDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty) taking
vertex IDs instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathDijkstra(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty) taking
vertex IDs instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathDijkstra(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost)
Dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxPath<ID> |
shortestPathDijkstra(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathDijkstraAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost)
Dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathDijkstraAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxPath<ID> |
shortestPathDijkstraBidirectional(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost)
Convenience wrapper around
shortestPathDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty)
taking vertex IDs instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathDijkstraBidirectional(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty)
taking vertex IDs instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathDijkstraBidirectional(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost)
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxPath<ID> |
shortestPathDijkstraBidirectional(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathDijkstraBidirectionalAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost)
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathDijkstraBidirectionalAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
java.lang.String parentName,
java.lang.String parentEdgeName) |
<ID> PgxFuture<PgxPath<ID>> |
shortestPathDijkstraBidirectionalAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
|
<ID> PgxPath<ID> |
shortestPathFilteredDijkstra(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr)
Convenience wrapper around
shortestPathFilteredDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter) taking vertex IDs
instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathFilteredDijkstra(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathFilteredDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter) taking vertex IDs
instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathFilteredDijkstra(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr)
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> PgxPath<ID> |
shortestPathFilteredDijkstra(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathFilteredDijkstraAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr)
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathFilteredDijkstraAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> PgxPath<ID> |
shortestPathFilteredDijkstraBidirectional(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr)
Convenience wrapper around
shortestPathFilteredDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter)
taking vertex IDs instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathFilteredDijkstraBidirectional(PgxGraph graph,
ID srcId,
ID dstId,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathFilteredDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter)
taking vertex IDs instead of PgxVertex . |
<ID> PgxPath<ID> |
shortestPathFilteredDijkstraBidirectional(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr)
Bidirectional dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> PgxPath<ID> |
shortestPathFilteredDijkstraBidirectional(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Bidirectional dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathFilteredDijkstraBidirectionalAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr)
Bidirectional dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> PgxFuture<PgxPath<ID>> |
shortestPathFilteredDijkstraBidirectionalAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr,
java.lang.String parentName,
java.lang.String parentEdgeName) |
<ID> PgxFuture<PgxPath<ID>> |
shortestPathFilteredDijkstraBidirectionalAsync(PgxGraph graph,
PgxVertex<ID> src,
PgxVertex<ID> dst,
EdgeProperty<java.lang.Double> cost,
GraphFilter filterExpr,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Bidirectional dijkstra is a fast algorithm for finding a shortest path while also filtering edges
|
<ID> AllPaths<ID> |
shortestPathHopDist(PgxGraph graph,
ID srcId)
Convenience wrapper around
shortestPathHopDist(PgxGraph, PgxVertex) taking a vertex ID instead of
PgxVertex . |
<ID> AllPaths<ID> |
shortestPathHopDist(PgxGraph graph,
ID srcId,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathHopDist(PgxGraph, PgxVertex, VertexProperty, VertexProperty,
VertexProperty) taking a vertex ID instead of
PgxVertex . |
<ID> AllPaths<ID> |
shortestPathHopDist(PgxGraph graph,
PgxVertex<ID> src)
Hop distance can give a relatively fast insight into the distances in a graph
|
<ID> AllPaths<ID> |
shortestPathHopDist(PgxGraph graph,
PgxVertex<ID> src,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Hop distance can give a relatively fast insight into the distances in a graph
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathHopDistAsync(PgxGraph graph,
PgxVertex<ID> src)
Hop distance can give a relatively fast insight into the distances in a graph
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathHopDistAsync(PgxGraph graph,
PgxVertex<ID> src,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Hop distance can give a relatively fast insight into the distances in a graph
|
<ID> AllPaths<ID> |
shortestPathHopDistReverse(PgxGraph graph,
ID srcId)
Convenience wrapper around
shortestPathHopDistReverse(PgxGraph, PgxVertex) taking a vertex ID instead of
PgxVertex . |
<ID> AllPaths<ID> |
shortestPathHopDistReverse(PgxGraph graph,
ID srcId,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Convenience wrapper around
shortestPathHopDistReverse(PgxGraph, PgxVertex, VertexProperty, VertexProperty,
VertexProperty) taking a vertex ID instead of
PgxVertex . |
<ID> AllPaths<ID> |
shortestPathHopDistReverse(PgxGraph graph,
PgxVertex<ID> src)
Backwards hop distance can give a relatively fast insight into the distances in a graph
|
<ID> AllPaths<ID> |
shortestPathHopDistReverse(PgxGraph graph,
PgxVertex<ID> src,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Backwards hop distance can give a relatively fast insight into the distances in a graph
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathHopDistReverseAsync(PgxGraph graph,
PgxVertex<ID> src)
Backwards hop distance can give a relatively fast insight into the distances in a graph
|
<ID> PgxFuture<AllPaths<ID>> |
shortestPathHopDistReverseAsync(PgxGraph graph,
PgxVertex<ID> src,
VertexProperty<ID,java.lang.Double> distance,
VertexProperty<ID,PgxVertex<ID>> parent,
VertexProperty<ID,PgxEdge> parentEdge)
Backwards hop distance can give a relatively fast insight into the distances in a graph
|
oracle.pgx.api.beta.mllib.SupervisedGraphWiseModelBuilder |
supervisedGraphWiseModelBuilder()
Returns a SupervisedGraphWise model builder that can be used to set the configuration of the model and then create it.
|
<ID> VertexProperty<ID,java.lang.Integer> |
topologicalSchedule(PgxGraph graph,
VertexSet<ID> source)
Topological schedule gives an order of visit for the reachable vertices from the source
|
<ID> VertexProperty<ID,java.lang.Integer> |
topologicalSchedule(PgxGraph graph,
VertexSet<ID> source,
VertexProperty<ID,java.lang.Integer> topoSched)
Topological schedule gives an order of visit for the reachable vertices from the source
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
topologicalScheduleAsync(PgxGraph graph,
VertexSet<ID> source)
Topological schedule gives an order of visit for the reachable vertices from the source
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
topologicalScheduleAsync(PgxGraph graph,
VertexSet<ID> source,
VertexProperty<ID,java.lang.Integer> topoSched)
Topological schedule gives an order of visit for the reachable vertices from the source
|
<ID> VertexProperty<ID,java.lang.Integer> |
topologicalSort(PgxGraph graph)
Topological sort gives an order of visit for vertices in directed acyclic graphs
|
<ID> VertexProperty<ID,java.lang.Integer> |
topologicalSort(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> topoSort)
Topological sort gives an order of visit for vertices in directed acyclic graphs
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
topologicalSortAsync(PgxGraph graph)
Topological sort gives an order of visit for vertices in directed acyclic graphs
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> |
topologicalSortAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Integer> topoSort)
Topological sort gives an order of visit for vertices in directed acyclic graphs
|
java.lang.String |
toString() |
<ID> VertexProperty<ID,java.lang.Double> |
vertexBetweennessCentrality(PgxGraph graph)
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
|
<ID> VertexProperty<ID,java.lang.Double> |
vertexBetweennessCentrality(PgxGraph graph,
VertexProperty<ID,java.lang.Double> bc)
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
vertexBetweennessCentralityAsync(PgxGraph graph)
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
vertexBetweennessCentralityAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Double> bc)
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
|
<ID> Partition<ID> |
wcc(PgxGraph graph)
Identifying weakly connected components can be useful for clustering graph data
|
<ID> Partition<ID> |
wcc(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Identifying weakly connected components can be useful for clustering graph data
|
<ID> PgxFuture<Partition<ID>> |
wccAsync(PgxGraph graph)
Identifying weakly connected components can be useful for clustering graph data
|
<ID> PgxFuture<Partition<ID>> |
wccAsync(PgxGraph graph,
java.lang.String partitonDistributionName) |
<ID> PgxFuture<Partition<ID>> |
wccAsync(PgxGraph graph,
VertexProperty<ID,java.lang.Long> partitionDistribution)
Identifying weakly connected components can be useful for clustering graph data
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
boolean norm,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> VertexProperty<ID,java.lang.Double> |
weightedPagerank(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
boolean norm,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
double e,
double d,
int max,
boolean norm,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
double e,
double d,
int max,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges.
|
<ID> PgxFuture<VertexProperty<ID,java.lang.Double>> |
weightedPagerankAsync(PgxGraph graph,
EdgeProperty<java.lang.Double> weight,
VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
ID vertexId,
int topK)
Convenience wrapper around
whomToFollow(PgxGraph, PgxVertex, int) taking a vertex ID instead of a
PgxVertex . |
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
ID vertexId,
int topK,
int sizeCircleOfTrust)
Convenience wrapper around
whomToFollow(PgxGraph, PgxVertex, int, int) taking a vertex ID instead of a
PgxVertex . |
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
ID vertexId,
int topK,
int sizeCircleOfTrust,
int maxIter,
java.math.BigDecimal tol,
java.math.BigDecimal dampingFactor,
int salsaMaxIter,
java.math.BigDecimal salsaTol)
Convenience wrapper around
whomToFollow(PgxGraph, PgxVertex, int, int, int, BigDecimal, BigDecimal, int, BigDecimal)
taking a vertex ID instead of a PgxVertex . |
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
ID vertexId,
int topK,
int sizeCircleOfTrust,
int maxIter,
java.math.BigDecimal tol,
java.math.BigDecimal dampingFactor,
int salsaMaxIter,
java.math.BigDecimal salsaTol,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
Convenience wrapper around
whomToFollow(PgxGraph, PgxVertex, int, int, int, BigDecimal, BigDecimal, int, BigDecimal, VertexSequence,
VertexSequence)
taking a vertex ID instead of a PgxVertex . |
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex)
Whom-to-follow (WTF) is a recommendation algorithm.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex,
int topK)
Whom-to-follow (WTF) is a recommendation algorithm.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust)
Whom-to-follow (WTF) is a recommendation algorithm.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust,
int maxIter,
double tol,
double dampingFactor,
int salsaMaxIter,
double salsaTol)
Whom-to-follow (WTF) is a recommendation algorithm.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust,
int maxIter,
double tol,
double dampingFactor,
int salsaMaxIter,
double salsaTol,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
<ID> Pair<VertexSequence<ID>,VertexSequence<ID>> |
whomToFollow(PgxGraph graph,
PgxVertex<ID> vertex,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex,
int topK)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust,
int maxIter,
double tol,
double dampingFactor,
int salsaMaxIter,
double salsaTol)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust,
int maxIter,
double tol,
double dampingFactor,
int salsaMaxIter,
double salsaTol,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
int sizeCircleOfTrust,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex,
int topK,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
<ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> |
whomToFollowAsync(PgxGraph graph,
PgxVertex<ID> vertex,
VertexSequence<ID> hubs,
VertexSequence<ID> authorities)
WTF is a recommendation algorithm.
|
close, destroy, destroyAsync
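The weightedPagerank and whomToFollow entries in the summary above are not accompanied by an example in this section. The following is a minimal usage sketch only: the damping factor, tolerance, iteration count, topK value, vertex ID 128 and the edge property name "cost" are illustrative assumptions, and the Pair accessors are assumed to be getFirst()/getSecond().
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");                // assumes an edge property named "cost"
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 0.001, 0.85, 100, true, weight);
PgxVertex<Integer> vertex = graph.getVertex(128);                           // assumes a vertex with ID 128 exists
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(graph, vertex, 10);
VertexSequence<Integer> hubs = wtf.getFirst();                              // assumed accessor
VertexSequence<Integer> authorities = wtf.getSecond();                      // assumed accessor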
public <V> EdgeProperty<java.lang.Double> adamicAdarCounting(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
The Adamic-Adar index is meant for undirected graphs, since it is computed using the degree of the shared neighbors by two vertices in the graph. This implementation computes the index for every pair of vertices connected by an edge and associates it with that edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E) with E = number of edges
O(E) with E = number of edges
graph
- the graph.
PgxGraph graph = ...;
EdgeProperty<Double> adamicAdar = analyst.adamicAdarCounting(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + adamicAdar.getName() + " MATCH (x) ORDER BY x." + adamicAdar.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <V> EdgeProperty<java.lang.Double> adamicAdarCounting(PgxGraph graph, EdgeProperty<java.lang.Double> aa) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
The Adamic-Adar index is meant for undirected graphs, since it is computed using the degree of the shared neighbors by two vertices in the graph. This implementation computes the index for every pair of vertices connected by an edge and associates it with that edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E) with E = number of edges
O(E) with E = number of edges
graph
- the graph.
aa
- (out argument) edge property holding the Adamic-Adar index of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> aa = graph.createEdgeProperty(PropertyType.DOUBLE);
EdgeProperty<Double> adamicAdar = analyst.adamicAdarCounting(graph, aa);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + adamicAdar.getName() + " MATCH (x) ORDER BY x." + adamicAdar.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public PgxFuture<EdgeProperty<java.lang.Double>> adamicAdarCountingAsync(PgxGraph graph)
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
The Adamic-Adar index is meant for undirected graphs, since it is computed using the degree of the shared neighbors by two vertices in the graph. This implementation computes the index for every pair of vertices connected by an edge and associates it with that edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E) with E = number of edges
O(E) with E = number of edges
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<EdgeProperty<Double>> promise = analyst.adamicAdarCountingAsync(graph);
promise.thenCompose(adamicAdar -> graph.queryPgqlAsync(
"SELECT x, x." + adamicAdar.getName() + " MATCH (x) ORDER BY x." + adamicAdar.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public PgxFuture<EdgeProperty<java.lang.Double>> adamicAdarCountingAsync(PgxGraph graph, EdgeProperty<java.lang.Double> aa)
The Adamic-Adar index compares the number of neighbors shared between vertices; this measure can be used with communities.
The Adamic-Adar index is meant for undirected graphs, since it is computed using the degree of the shared neighbors by two vertices in the graph. This implementation computes the index for every pair of vertices connected by an edge and associates it with that edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E) with E = number of edges
O(E) with E = number of edges
graph
- the graph.
aa
- (out argument) edge property holding the Adamic-Adar index of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> aa = graph.createEdgeProperty(PropertyType.DOUBLE);
PgxFuture<EdgeProperty<Double>> promise = analyst.adamicAdarCountingAsync(graph, aa);
promise.thenCompose(adamicAdar -> graph.queryPgqlAsync(
"SELECT x, x." + adamicAdar.getName() + " MATCH (x) ORDER BY x." + adamicAdar.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>> allReachableVerticesEdges(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int k) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph
- the graph.
src
- the source vertex.
dst
- the destination vertex.
k
- the maximum length of the path.
Triple containing a vertex-set with the vertices on the path, an edge-set with the edges on the path and a map containing the distances from the source vertex for each vertex on the path
java.util.concurrent.ExecutionException
java.lang.InterruptedException
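No example is given for allReachableVerticesEdges in this reference; the sketch below is illustrative only, assuming vertices with IDs 128 and 333 exist in the graph and using k = 3 as the maximum path length.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
org.apache.commons.lang3.tuple.Triple<VertexSet<Integer>, EdgeSet, PgxMap<PgxVertex<Integer>, Integer>> result =
    analyst.allReachableVerticesEdges(graph, src, dst, 3);
VertexSet<Integer> verticesOnPath = result.getLeft();                // vertices on paths of length <= 3
EdgeSet edgesOnPath = result.getMiddle();                            // edges on those paths
PgxMap<PgxVertex<Integer>, Integer> distances = result.getRight();   // distance from src for each vertex on a path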
public <ID> PgxFuture<org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>>> allReachableVerticesEdgesAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int k)
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph
- the graph.
src
- the source vertex.
dst
- the destination vertex.
k
- the maximum length of the path.
Triple containing a vertex-set with the vertices on the path, an edge-set with the edges on the path and a map containing the distances from the source vertex for each vertex on the path
public <ID> org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>> allReachableVerticesEdgesFiltered(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int k, EdgeFilter filter) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph
- the graph.
src
- the source vertex.
dst
- the destination vertex.
k
- the maximum length of the path.
filter
- the filter to be used on edges when searching for a path.
Triple containing a vertex-set with the vertices on the path, an edge-set with the edges on the path and a map containing the distances from the source vertex for each vertex on the path
java.util.concurrent.ExecutionException
java.lang.InterruptedException
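As with the unfiltered variant, no example is given for allReachableVerticesEdgesFiltered; the sketch below is illustrative only, and the edge property "cost" used in the filter expression is an assumption.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");      // assumes an edge property named "cost"
org.apache.commons.lang3.tuple.Triple<VertexSet<Integer>, EdgeSet, PgxMap<PgxVertex<Integer>, Integer>> result =
    analyst.allReachableVerticesEdgesFiltered(graph, src, dst, 3, filter);
VertexSet<Integer> verticesOnPath = result.getLeft();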
public <ID> PgxFuture<org.apache.commons.lang3.tuple.Triple<VertexSet<ID>,EdgeSet,PgxMap<PgxVertex<ID>,java.lang.Integer>>> allReachableVerticesEdgesFilteredAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int k, EdgeFilter filter)
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
Finds all the vertices and edges on a path between the src and target of length smaller or equal to k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph
- the graph.
src
- the source vertex.
dst
- the destination vertex.
k
- the maximum length of the path.
filter
- the filter to be used on edges when searching for a path.
Triple containing a vertex-set with the vertices on the path, an edge-set with the edges on the path and a map containing the distances from the source vertex for each vertex on the path
public <ID> VertexProperty<ID,java.lang.Double> approximateVertexBetweennessCentrality(PgxGraph graph, int k) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using k random vertices as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
k
- number of random vertices to be used to compute the approximated betweenness centrality coefficients.
PgxGraph graph = ...;
VertexProperty<Integer, Double> betweenness = analyst.approximateVertexBetweennessCentrality(graph, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> approximateVertexBetweennessCentrality(PgxGraph graph, int k, VertexProperty<ID,java.lang.Double> bc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using k random vertices as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
k
- number of random vertices to be used to compute the approximated betweenness centrality coefficients.
bc
- (out argument) vertex property holding the betweenness centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> bc = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> betweenness = analyst.approximateVertexBetweennessCentrality(graph, 100, bc);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> approximateVertexBetweennessCentralityAsync(PgxGraph graph, int k)
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using k random vertices as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
k
- number of random vertices to be used to compute the approximated betweenness centrality coefficients.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.approximateVertexBetweennessCentralityAsync(
graph, 100);
promise.thenCompose(betweenness -> graph.queryPgqlAsync(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> approximateVertexBetweennessCentralityAsync(PgxGraph graph, int k, VertexProperty<ID,java.lang.Double> bc)
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using k random vertices as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
k
- number of random vertices to be used to compute the approximated betweenness centrality coefficients.
bc
- (out argument) vertex property holding the betweenness centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> bc = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.approximateVertexBetweennessCentralityAsync(
graph, 100, bc);
promise.thenCompose(betweenness -> graph.queryPgqlAsync(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
@SafeVarargs public final <ID> VertexProperty<ID,java.lang.Double> approximateVertexBetweennessCentralityFromSeeds(PgxGraph graph, PgxVertex<ID>... seeds) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using the vertices from the given sequence as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
seeds
- the (unique) chosen nodes to be used to compute the approximated betweenness centrality coefficients.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> betweenness =
analyst.approximateVertexBetweennessCentralityFromSeeds(graph, vertex);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
@SafeVarargs public final <ID> VertexProperty<ID,java.lang.Double> approximateVertexBetweennessCentralityFromSeeds(PgxGraph graph, VertexProperty<ID,java.lang.Double> bc, PgxVertex<ID>... seeds) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using the vertices from the given sequence as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
bc
- (out argument) vertex property holding the betweenness centrality value for each vertex.
seeds
- the (unique) chosen nodes to be used to compute the approximated betweenness centrality coefficients.
PgxGraph graph = ...;
VertexProperty<Integer, Double> bc = graph.createVertexProperty(PropertyType.DOUBLE);
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> betweenness =
analyst.approximateVertexBetweennessCentralityFromSeeds(graph, bc, vertex);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
@SafeVarargs public final <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> approximateVertexBetweennessCentralityFromSeedsAsync(PgxGraph graph, PgxVertex<ID>... seeds)
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using the vertices from the given sequence as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
seeds
- the (unique) chosen nodes to be used to compute the approximated betweenness centrality coefficients.
PgxGraph graph = ...;
PgxVertex<Integer> v1 = graph.getVertex(128);
PgxVertex<Integer> v2 = graph.getVertex(333);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.approximateVertexBetweennessCentralityFromSeedsAsync(
graph, v1, v2);
promise.thenCompose(betweenness -> graph.queryPgqlAsync(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
@SafeVarargs public final <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> approximateVertexBetweennessCentralityFromSeedsAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> bc, PgxVertex<ID>... seeds)
Faster, but less accurate than betweenness centrality, it identifies important vertices for the flow of information
This variant of betweenness centrality approximates the centrality of the vertices by just using the vertices from the given sequence as starting points for the BFS traversals of the graph, instead of computing the exact value by using all the vertices in the graph.
The implementation of this algorithm uses a built-in BFS method for the graph traversals. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
bc
- (out argument) vertex property holding the betweenness centrality value for each vertex.
seeds
- the (unique) chosen nodes to be used to compute the approximated betweenness centrality coefficients.
PgxGraph graph = ...;
VertexProperty<Integer, Double> bc = graph.createVertexProperty(PropertyType.DOUBLE);
PgxVertex<Integer> v1 = graph.getVertex(128);
PgxVertex<Integer> v2 = graph.getVertex(333);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.approximateVertexBetweennessCentralityFromSeedsAsync(
graph, bc, v1, v2);
promise.thenCompose(betweenness -> graph.queryPgqlAsync(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Boolean> bipartiteCheck(PgxGraph graph, VertexProperty<ID,java.lang.Boolean> isLeft) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Bipartite check verifies whether a graph is bipartite.
This algorithm checks whether the given directed graph is bipartite. It assumes that all the edges are going in the same direction since the method relies on BFS traversals of the graph. If the graph is bipartite the algorithm will return the side of each vertex in the graph with the is_left vertex property.
The implementation of this algorithm uses a built-in BFS method for the graph traversals.
O(E) with E = number of edges
O(2 * V) with V = number of vertices
graph
- the graph.
isLeft
- vertex property holding the side of each vertex in a bipartite graph (true for left, false for right).
PgxGraph graph = ...;
VertexProperty<Integer, Boolean> isLeft = graph.createVertexProperty(PropertyType.BOOLEAN);
VertexProperty<Integer, Boolean> bipartite = analyst.bipartiteCheck(graph, isLeft);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Boolean>> bipartiteCheckAsync(PgxGraph graph, VertexProperty<ID,java.lang.Boolean> isLeft)
Bipartite check verifies whether a graph is bipartite.
This algorithm checks whether the given directed graph is bipartite. It assumes that all the edges are going in the same direction since the method relies on BFS traversals of the graph. If the graph is bipartite the algorithm will return the side of each vertex in the graph with the is_left vertex property.
The implementation of this algorithm uses a built-in BFS method for the graph traversals.
O(E) with E = number of edges
O(2 * V) with V = number of vertices
graph
- the graph.
isLeft
- vertex property holding the side of each vertex in a bipartite graph (true for left, false for right).
PgxGraph graph = ...;
VertexProperty<Integer, Boolean> isLeft = graph.createVertexProperty(PropertyType.BOOLEAN);
PgxFuture<VertexProperty<Integer, Boolean>> promise = analyst.bipartiteCheckAsync(graph, isLeft);
public <ID> VertexSet<ID> center(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. The algorithm will return a set with all the vertices for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It still is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexSet<Integer> center = analyst.center(graph);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexSet<ID> center(PgxGraph graph, VertexSet<ID> center) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. The algorithm will return a set with all the vertices for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It still is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
center
- (out argument) vertex set holding the vertices from the periphery or center of the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.createVertexSet();
VertexSet<Integer> center = analyst.center(graph, vertices);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexSet<ID>> centerAsync(PgxGraph graph)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. The algorithm will return a set with all the vertices for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It still is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexSet<Integer>> promise = analyst.centerAsync(graph);
promise.thenAccept(center -> {
...;
});
public <ID> PgxFuture<VertexSet<ID>> centerAsync(PgxGraph graph, VertexSet<ID> center)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. The algorithm will return a set with all the vertices for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It still is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
center
- (out argument) vertex set holding the vertices from the periphery or center of the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.createVertexSet();
PgxFuture<VertexSet<Integer>> promise = analyst.centerAsync(graph, vertices);
promise.thenAccept(center -> {
...;
});
public <ID> VertexProperty<ID,java.lang.Double> closenessCentralityDoubleLength(PgxGraph graph, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Closeness centrality measures the centrality of the vertices based on weighted distances, allowing one to find well-connected vertices
This variant of Closeness Centrality takes into account the weights of the edges when computing the reciprocal of the sum of all the distances from the possible shortest paths starting from the vertex V, for every vertex in the graph. The weights of the edges must be positive values greater than 0.
This implementation is more expensive to compute than the normal closeness centrality because the edge weights influence the selection of edges for the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E * d) with E = number of edges, V = number of vertices, d = diameter of the graph
O(5 * V) with V = number of vertices
graph
- the graph.
cost
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> closeness = analyst.closenessCentralityDoubleLength(graph, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> closenessCentralityDoubleLength(PgxGraph graph, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> cc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Closeness centrality measures the centrality of the vertices based on weighted distances, allowing one to find well-connected vertices
This variant of Closeness Centrality takes into account the weights of the edges when computing the reciprocal of the sum of all the distances from the possible shortest paths starting from the vertex V, for every vertex in the graph. The weights of the edges must be positive values greater than 0.
This implementation is more expensive to compute than the normal closeness centrality because the edge weights influence the selection of edges for the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E * d) with E = number of edges, V = number of vertices, d = diameter of the graph
O(5 * V) with V = number of vertices
graph
- the graph.
cost
- edge property holding the weight of each edge in the graph.
cc
- (out argument) vertex property holding the closeness centrality value for each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> cc = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> closeness = analyst.closenessCentralityDoubleLength(graph, cost, cc);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> closenessCentralityDoubleLengthAsync(PgxGraph graph, EdgeProperty<java.lang.Double> cost)
Closeness centrality measures the centrality of the vertices based on weighted distances, allowing one to find well-connected vertices
This variant of Closeness Centrality takes into account the weights of the edges when computing the reciprocal of the sum of all the distances from the possible shortest paths starting from the vertex V, for every vertex in the graph. The weights of the edges must be positive values greater than 0.
This implementation is more expensive to compute than the normal closeness centrality because the edge weights influence the selection of edges for the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E * d) with E = number of edges, V = number of vertices, d = diameter of the graph
O(5 * V) with V = number of vertices
graph
- the graph.
cost
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.closenessCentralityDoubleLengthAsync(graph, cost);
promise.thenCompose(closeness -> graph.queryPgqlAsync(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> closenessCentralityDoubleLengthAsync(PgxGraph graph, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> cc)
Closeness centrality measures the centrality of the vertices based on weighted distances, allowing one to find well-connected vertices
This variant of Closeness Centrality takes into account the weights of the edges when computing the reciprocal of the sum of all the distances from the possible shortest paths starting from the vertex V, for every vertex in the graph. The weights of the edges must be positive values greater than 0.
This implementation is more expensive to compute than the normal closeness centrality because the edge weights influence the selection of edges for the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E * d) with E = number of edges, V = number of vertices, d = diameter of the graph
O(5 * V) with V = number of vertices
graph
- the graph.
cost
- edge property holding the weight of each edge in the graph.
cc
- (out argument) vertex property holding the closeness centrality value for each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> cc = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.closenessCentralityDoubleLengthAsync(graph, cost, cc);
promise.thenCompose(closeness -> graph.queryPgqlAsync(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Double> closenessCentralityUnitLength(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Closeness centrality measures the centrality of the vertices based on distances, allowing one to find well-connected vertices
The Closeness Centrality of a node V is the reciprocal of the sum of all the distances from the possible shortest paths starting from V. Thus the higher the centrality value of V, the closer it is to all the other vertices in the graph. This implementation is meant for undirected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Double> closeness = analyst.closenessCentralityUnitLength(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> closenessCentralityUnitLength(PgxGraph graph, VertexProperty<ID,java.lang.Double> cc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Closeness centrality measures the centrality of the vertices based on distances, allowing one to find well-connected vertices
The Closeness Centrality of a node V is the reciprocal of the sum of all the distances from the possible shortest paths starting from V. Thus the higher the centrality value of V, the closer it is to all the other vertices in the graph. This implementation is meant for undirected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
cc
- (out argument) node property holding the closeness centrality value for each node.
PgxGraph graph = ...;
VertexProperty<Integer, Double> cc = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> closeness = analyst.closenessCentralityUnitLength(graph, cc);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> closenessCentralityUnitLengthAsync(PgxGraph graph)
Closeness centrality measures the centrality of the vertices based on distances, allowing one to find well-connected vertices
The Closeness Centrality of a node V is the reciprocal of the sum of all the distances from the possible shortest paths starting from V. Thus the higher the centrality value of V, the closer it is to all the other vertices in the graph. This implementation is meant for undirected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.closenessCentralityUnitLengthAsync(graph);
promise.thenCompose(closeness -> graph.queryPgqlAsync(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> closenessCentralityUnitLengthAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> cc)
Closeness centrality measures the centrality of the vertices based on distances, allowing one to find well-connected vertices
The Closeness Centrality of a node V is the reciprocal of the sum of all the distances from the possible shortest paths starting from V. Thus the higher the centrality value of V, the closer it is to all the other vertices in the graph. This implementation is meant for undirected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
cc
- (out argument) node property holding the closeness centrality value for each node.
PgxGraph graph = ...;
VertexProperty<Integer, Double> cc = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.closenessCentralityUnitLengthAsync(graph, cc);
promise.thenCompose(closeness -> graph.queryPgqlAsync(
"SELECT x, x." + closeness.getName() + " MATCH (x) ORDER BY x." + closeness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Partition<ID> communitiesConductanceMinimization(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, with two vertices A and B, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why you should set a maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
PgxGraph graph = ...;
Partition<Integer> conductance = analyst.communitiesConductanceMinimization(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesConductanceMinimization(PgxGraph graph, int maxIterations) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, with two vertices A and B, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why you should set a maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
maxIterations
- maximum number of iterations that will be performed. For most graphs, a maximum of 100 iterations should be enough.
PgxGraph graph = ...;
Partition<Integer> conductance = analyst.communitiesConductanceMinimization(graph, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesConductanceMinimization(PgxGraph graph, int maxIterations, VertexProperty<ID,java.lang.Long> partitionDistribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, with two vertices A and B, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why you should set a maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
maxIterations
- maximum number of iterations that will be performed. For most graphs, a maximum of 100 iterations should be enough.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> conductance = analyst.communitiesConductanceMinimization(graph, 100, pd);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesConductanceMinimization(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, with two vertices A and B, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why you should set a maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> conductance = analyst.communitiesConductanceMinimization(graph, pd);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Partition<ID>> communitiesConductanceMinimizationAsync(PgxGraph graph)
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, with two vertices A and B, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why you should set a maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<Partition<Integer>> promise = analyst.communitiesConductanceMinimizationAsync(graph);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> communitiesConductanceMinimizationAsync(PgxGraph graph, int max)
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, with two vertices A and B, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why you should set a maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
max
- maximum number of iterations that will be performed. For most graphs, a maximum of 100 iterations should be enough.
PgxGraph graph = ...;
PgxFuture<Partition<Integer>> promise = analyst.communitiesConductanceMinimizationAsync(graph, 100);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> communitiesConductanceMinimizationAsync(PgxGraph graph, int maxIterations, VertexProperty<ID,java.lang.Long> partitionDistribution)
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, with two vertices A and B, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why you should set a maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
maxIterations
- maximum number of iterations that will be performed. For most graphs, a maximum of 100 iterations should be enough.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesConductanceMinimizationAsync(graph, 100, pd);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> communitiesConductanceMinimizationAsync(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution)
Soman and Narang can find communities in a graph taking weighted edges into account
The algorithm proposed by Soman and Narang to find community structures in a graph can be regarded as a variant of the label propagation algorithm, since it takes into account weights over the edges when looking for the community assignments. This implementation generates the weight of the edges by using the triangles in the graph, and just like label propagation, it assigns a unique community label to each vertex in the graph at the beginning, which is then updated on each iteration by looking and choosing the most frequent label from the labels of their neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic. Also note that there is a possibility of constant vertex swapping: for example, given two vertices A and B in the graph, it can happen that A acquires the community of B while, at the same time, B acquires the community of A. That is why a maximum number of iterations should be set.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(5 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the graph.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesConductanceMinimizationAsync(graph, pd);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Partition<ID> communitiesInfomap(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> partition = analyst.communitiesInfomap(graph, rank, weight);
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesInfomap(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight, double tau, double tol, int maxIter) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
tau
- damping factor.
tol
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> partition = analyst.communitiesInfomap(graph, rank, weight, 0.15, 0.0001, 10);
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesInfomap(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight, double tau, double tol, int maxIter, VertexProperty<ID,java.lang.Long> module) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
tau
- damping factor.
tol
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
module
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> partition = analyst.communitiesInfomap(graph, rank, weight, 0.15, 0.0001, 10, module);
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesInfomap(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Long> module) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
module
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> partition = analyst.communitiesInfomap(graph, rank, weight, module);
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Partition<ID>> communitiesInfomapAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight)
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesInfomapAsync(graph, rank, weight);
Partition<Integer> partition = promise.get();
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
public <ID> PgxFuture<Partition<ID>> communitiesInfomapAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight, double tau, double tol, int maxIter)
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
tau
- damping factor.
tol
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesInfomapAsync(graph, rank, weight, 0.15, 0.0001, 10);
Partition<Integer> partition = promise.get();
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
public <ID> PgxFuture<Partition<ID>> communitiesInfomapAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight, double tau, double tol, int maxIter, VertexProperty<ID,java.lang.Long> module)
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
tau
- damping factor.
tol
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
module
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesInfomapAsync(graph, rank, weight, 0.15, 0.0001,
10, module);
Partition<Integer> partition = promise.get();
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
public <ID> PgxFuture<Partition<ID>> communitiesInfomapAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Long> module)
Infomap can find high quality communities in a graph.
Infomap is a robust algorithm designed to find community structures in a graph that requires some pre-processing steps. This implementation needs a reciprocated or an undirected graph, as well as the ranking score from the normalized weighted version of the Pagerank algorithm. It will assign a unique module (or community) label to each vertex in the graph based on their Pagerank score, edge weights and the labels of their neighbors. It is an iterative algorithm that updates the labels of the vertices in random order on each iteration using the previous factors, converging once there are no further changes in the vertex labels, or once the maximum number of iterations is reached. The algorithm is non-deterministic because of the random order for visiting and updating the vertex labels, thus the communities found might be different each time the algorithm is run.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic. It is an expensive algorithm to run on large graphs.
O((k ^ 2) * E) with E = number of edges, k <= maximum number of iterations
O(10 * V + 2 * E) with V = number of vertices, E = number of edges
graph
- the undirected graph.
rank
- vertex property holding the normalized (weighted) PageRank value for each vertex (a value between 0 and 1).
weight
- edge property holding the weight of each edge in the graph.
module
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = analyst.weightedPagerank(graph, 1e-16, 0.85, 1000, true, weight);
VertexProperty<Integer, Long> module = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesInfomapAsync(graph, rank, weight, module);
Partition<Integer> partition = promise.get();
VertexCollection<Integer> firstComponent = partition.getPartitionByIndex(0);
public <ID> Partition<ID> communitiesLabelPropagation(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
Partition<Integer> conductance = analyst.communitiesLabelPropagation(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
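For reference, the per-vertex update described above can be written compactly. This formula is not part of the Javadoc and is only a sketch of the usual label propagation step, with ties assumed to be broken arbitrarily:

\ell_v^{(t+1)} = \operatorname{arg\,max}_{c} \left|\{ u \in N(v) : \ell_u^{(t)} = c \}\right|

where N(v) denotes the neighbors of v and \ell_v^{(t)} is the community label of v after iteration t. Convergence means \ell_v^{(t+1)} = \ell_v^{(t)} for every vertex v.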
public <ID> Partition<ID> communitiesLabelPropagation(PgxGraph graph, int maxIterations) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
maxIterations
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
Partition<Integer> conductance = analyst.communitiesLabelPropagation(graph, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesLabelPropagation(PgxGraph graph, int maxIterations, VertexProperty<ID,java.lang.Long> partitionDistribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
maxIterations
- maximum number of iterations that will be performed.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> conductance = analyst.communitiesLabelPropagation(graph, 100, pd);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> communitiesLabelPropagation(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> conductance = analyst.communitiesLabelPropagation(graph, pd);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Partition<ID>> communitiesLabelPropagationAsync(PgxGraph graph)
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<Partition<Integer>> promise = analyst.communitiesLabelPropagationAsync(graph);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> communitiesLabelPropagationAsync(PgxGraph graph, int maxIterations)
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
maxIterations
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
PgxFuture<Partition<Integer>> promise = analyst.communitiesLabelPropagationAsync(graph, 100);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> communitiesLabelPropagationAsync(PgxGraph graph, int maxIterations, VertexProperty<ID,java.lang.Long> partitionDistribution)
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
maxIterations
- maximum number of iterations that will be performed.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesLabelPropagationAsync(graph, 100, pd);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> communitiesLabelPropagationAsync(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution)
Label propagation can find communities in a graph relatively fast
Label Propagation is an algorithm designed to find community structures in a graph. It assigns a unique community label to each vertex in the graph, which then is updated on each iteration by looking and choosing the most frequent label amongst those from its neighbors. Convergence is achieved once the label of each vertex is the same as the most frequent one amongst its neighbors, i.e. when there are no changes in the communities assigned to the vertices in one iteration.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration it is non-deterministic.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
partitionDistribution
- vertex property holding the label of the community assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.communitiesLabelPropagationAsync(graph, pd);
promise.thenCompose(conductance -> graph.queryPgqlAsync(
"SELECT x, x." + conductance.getPropertyName() + " MATCH (x) ORDER BY x." + conductance.getPropertyName() +
" DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>> computeHighDegreeVertices(PgxGraph graph, int k) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes the k vertices with the highest degrees in the graph.
Computes the k vertices with the highest degrees in the graph. The resulting map contains a mapping from each sorted index to the corresponding high-degree vertex.
O(N log N) with N = number of vertices
O(k) with k = number of high-degree vertices
graph
- the graph.
k
- number of high-degree vertices to be computed.
Returns a map with the k high-degree vertices and their indices and a second vertex set containing the same vertices.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
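The Javadoc carries no example for this method, so the following is a minimal, hypothetical usage sketch. It assumes an existing analyst and a loaded graph with Integer vertex IDs, and it only relies on the getFirst()/getSecond() accessors of Pair that the diameter examples further below also use.
PgxGraph graph = ...;
// compute the 10 vertices with the highest degree
Pair<PgxMap<Integer, PgxVertex<Integer>>, VertexSet<Integer>> highDegree =
    analyst.computeHighDegreeVertices(graph, 10);
PgxMap<Integer, PgxVertex<Integer>> mapping = highDegree.getFirst(); // sorted index -> high-degree vertex
VertexSet<Integer> vertices = highDegree.getSecond();                // the same vertices as a set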
public <ID> Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>> computeHighDegreeVertices(PgxGraph graph, int k, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes the k vertices with the highest degrees in the graph.
Computes the k vertices with the highest degrees in the graph. The resulting map contains a mapping from each sorted index to the corresponding high-degree vertex.
O(N log N) with N = number of vertices
O(k) with k = number of high-degree vertices
graph
- the graph.
k
- number of high-degree vertices to be computed.
highDegreeVertexMapping
- (out argument) the high-degree vertices.
highDegreeVertices
- (out argument) the high-degree vertices.
Returns a map with the k high-degree vertices and their indices and a second vertex set containing the same vertices.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>>> computeHighDegreeVerticesAsync(PgxGraph graph, int k)
Computes the k vertices with the highest degrees in the graph.
Computes the k vertices with the highest degrees in the graph. The resulting map contains a mapping from each sorted index to the corresponding high-degree vertex.
O(N log N) with N = number of vertices
O(k) with k = number of high-degree vertices
graph
- the graph.
k
- number of high-degree vertices to be computed.
Returns a map with the k high-degree vertices and their indices and a second vertex set containing the same vertices.
public <ID> PgxFuture<Pair<PgxMap<java.lang.Integer,PgxVertex<ID>>,VertexSet<ID>>> computeHighDegreeVerticesAsync(PgxGraph graph, int k, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices)
Computes the k vertices with the highest degrees in the graph.
Computes the k vertices with the highest degrees in the graph. The resulting map contains a mapping from each sorted index to the corresponding high-degree vertex.
O(N log N) with N = number of vertices
O(k) with k = number of high-degree vertices
graph
- the graph.
k
- number of high-degree vertices to be computed.
highDegreeVertexMapping
- (out-argument) the high-degree vertices.
highDegreeVertices
- (out-argument) the high-degree vertices.
Returns a map with the k high-degree vertices and their indices and a second vertex set containing the same vertices.
public <ID> Scalar<java.lang.Double> conductance(PgxGraph graph, Partition<ID> partition, long partitionIndex) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Conductance assesses the quality of a partition in a graph
Conductance in a graph is computed for a specific cut of it. A cut is a partition of the graph into two subsets (components), disconnecting the graph if the edges from the cut are removed. Thus the algorithm requires a labeling for the vertices in the different subsets of the graph, then the conductance is computed by the ratio of the edges belonging to the given cut (i.e. the edges that split the graph into disconnected components) and the edges belonging to each of these subsets. If there is more than one cut (or partition), this implementation will take the given component number as reference to compute the conductance associated with that particular cut.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
partitionIndex
- number of the component to be used for computing its conductance.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Scalar<Double> conductance = analyst.conductance(graph, partition, 0);
conductance.get();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
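For orientation only: a common textbook way to write the conductance of a vertex subset S is given below. This formula is not part of the Javadoc, and the exact ratio used by this implementation is the one described in the text above, so treat it purely as a reference definition:

\phi(S) = \frac{\left|\{(u,v) \in E : u \in S,\ v \notin S\}\right|}{\min\left(\mathrm{vol}(S),\ \mathrm{vol}(V \setminus S)\right)}, \qquad \mathrm{vol}(S) = \sum_{v \in S} \deg(v)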
public <ID> Scalar<java.lang.Double> conductance(PgxGraph graph, Partition<ID> partition, long partitionIndex, Scalar<java.lang.Double> conductance) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Conductance assesses the quality of a partition in a graph
Conductance in a graph is computed for a specific cut of it. A cut is a partition of the graph into two subsets (components), disconnecting the graph if the edges from the cut are removed. Thus the algorithm requires a labeling for the vertices in the different subsets of the graph, then the conductance is computed by the ratio of the edges belonging to the given cut (i.e. the edges that split the graph into disconnected components) and the edges belonging to each of these subsets. If there is more than one cut (or partition), this implementation will take the given component number as reference to compute the conductance associated with that particular cut.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
partitionIndex
- number of the component to be used for computing its conductance.
conductance
- Scalar (double) to store the conductance value of the given cut.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Scalar<Double> scalar = graph.createScalar(PropertyType.DOUBLE);
Scalar<Double> conductance = analyst.conductance(graph, partition, 0, scalar);
conductance.get();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Scalar<java.lang.Double>> conductanceAsync(PgxGraph graph, Partition<ID> partition, long partitionIndex)
Conductance assesses the quality of a partition in a graph
Conductance in a graph is computed for a specific cut of it. A cut is a partition of the graph into two subsets (components), disconnecting the graph if the edges from the cut are removed. Thus the algorithm requires a labeling for the vertices in the different subsets of the graph, then the conductance is computed by the ratio of the edges belonging to the given cut (i.e. the edges that split the graph into disconnected components) and the edges belonging to each of these subsets. If there is more than one cut (or partition), this implementation will take the given component number as reference to compute the conductance associated with that particular cut.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
partitionIndex
- number of the component to be used for computing its conductance.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
PgxFuture<Scalar<Double>> promise = analyst.conductanceAsync(graph, partition, 0);
promise.thenAccept(conductance -> {
conductance.get();
});
public <ID> PgxFuture<Scalar<java.lang.Double>> conductanceAsync(PgxGraph graph, Partition<ID> partition, long partitionIndex, Scalar<java.lang.Double> conductance)
Conductance assesses the quality of a partition in a graph
Conductance in a graph is computed for a specific cut of it. A cut is a partition of the graph into two subsets (components), disconnecting the graph if the edges from the cut are removed. Thus the algorithm requires a labeling for the vertices in the different subsets of the graph, then the conductance is computed by the ratio of the edges belonging to the given cut (i.e. the edges that split the graph into disconnected components) and the edges belonging to each of these subsets. If there is more than one cut (or partition), this implementation will take the given component number as reference to compute the conductance associated with that particular cut.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
partitionIndex
- number of the component to be used for computing its conductance.
conductance
- Scalar (double) to store the conductance value of the given cut.
PgxGraph graph = ...;
Scalar<Double> scalar = graph.createScalar(PropertyType.DOUBLE);
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
PgxFuture<Scalar<Double>> promise = analyst.conductanceAsync(graph, partition, 0, scalar);
promise.thenAccept(conductance -> {
conductance.get();
});
public long countTriangles(PgxGraph graph, boolean sortVerticesByDegree) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Triangle counting gives an overview of the number of connections between vertices in neighborhoods
This algorithm is intended for directed graphs and will count all the existing triangles on it. To run the algorithm on undirected graphs, use the undirected version.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E ^ 1.5) with E = number of edges
O(V) with V = number of vertices
graph
- the graph.
sortVerticesByDegree
- boolean flag for sorting the nodes by their degree as preprocessing step.
PgxGraph graph = ...;
long result = analyst.countTriangles(graph, true);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public PgxFuture<java.lang.Long> countTrianglesAsync(PgxGraph graph, boolean sortVerticesByDegree)
Triangle counting gives an overview of the number of connections between vertices in neighborhoods
This algorithm is intended for directed graphs and will count all the existing triangles on it. To run the algorithm on undirected graphs, use the undirected version.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E ^ 1.5) with E = number of edges
O(V) with V = number of vertices
graph
- the graph.
sortVerticesByDegree
- boolean flag for sorting the nodes by their degree as preprocessing step.
PgxGraph graph = ...;
PgxFuture<Long> promise = analyst.countTrianglesAsync(graph, true);
promise.thenAccept(result -> {
...;
});
public <ID> VertexProperty<ID,PgxVect<java.lang.Integer>> createDistanceIndex(PgxGraph graph, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes an index with distances to each high-degree vertex.
Computes an index which contains the distance to the given high-degree vertices for every node in the graph.
O(E * k) with E = number of edges, k <= number of high-degree vertices
O(V * k) with V = number of vertices, k = number of high-degree vertices
graph
- the graph.
highDegreeVertexMapping
- map containing the high-degree vertices as values and indices from 0 to k as keys.
highDegreeVertices
- a set containing the high-degree vertices.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
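No example accompanies this method, so here is a minimal, hypothetical sketch of the usual pipeline: the high-degree vertices produced by computeHighDegreeVertices (documented above) are passed directly to createDistanceIndex. It assumes an existing analyst and a loaded graph with Integer vertex IDs.
PgxGraph graph = ...;
// pick the 10 highest-degree vertices
Pair<PgxMap<Integer, PgxVertex<Integer>>, VertexSet<Integer>> highDegree =
    analyst.computeHighDegreeVertices(graph, 10);
// for every vertex, store the vector of distances to those 10 high-degree vertices
VertexProperty<Integer, PgxVect<Integer>> index =
    analyst.createDistanceIndex(graph, highDegree.getFirst(), highDegree.getSecond());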
public <ID> VertexProperty<ID,PgxVect<java.lang.Integer>> createDistanceIndex(PgxGraph graph, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes an index with distances to each high-degree vertex.
Computes an index which contains the distance to the given high-degree vertices for every node in the graph.
O(E * k) with E = number of edges, k <= number of high-degree vertices
O(V * k) with V = number of vertices, k = number of high-degree vertices
graph
- the graph.
highDegreeVertexMapping
- map containing the high-degree vertices as values and indices from 0 to k as keys.
highDegreeVertices
- a set containing the high-degree vertices.
index
- (out-argument) the index containing the distances to each high-degree vertex for all vertices.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,PgxVect<java.lang.Integer>>> createDistanceIndexAsync(PgxGraph graph, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices)
Computes an index with distances to each high-degree vertex.
Computes an index which contains the distance to the given high-degree vertices for every node in the graph.
O(E * k) with E = number of edges, k <= number of high-degree vertices
O(V * k) with V = number of vertices, k = number of high-degree vertices
graph
- the graph.
highDegreeVertexMapping
- map containing the high-degree vertices as values and indices from 0 to k as keys.
highDegreeVertices
- a set containing the high-degree vertices.
public <ID> PgxFuture<VertexProperty<ID,PgxVect<java.lang.Integer>>> createDistanceIndexAsync(PgxGraph graph, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index)
Computes an index with distances to each high-degree vertex.
Computes an index which contains the distance to the given high-degree vertices for every node in the graph.
O(E * k) with E = number of edges, k <= number of high-degree vertices
O(V * k) with V = number of vertices, k = number of high-degree vertices
graph
- the graph.
highDegreeVertexMapping
- map containing the high-degree vertices as values and indices from 0 to k as keys.
highDegreeVertices
- a set containing the high-degree vertices.
index
- (out-argument) the index containing the distances to each high-degree vertex for all vertices.
public oracle.pgx.api.beta.mllib.DeepWalkModelBuilder deepWalkModelBuilder()
public <ID> VertexProperty<ID,java.lang.Integer> degreeCentrality(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Degree centrality counts the number of outgoing and incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> degree = analyst.degreeCentrality(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Integer> degreeCentrality(PgxGraph graph, VertexProperty<ID,java.lang.Integer> dc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Degree centrality counts the number of outgoing and incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
dc
- (out argument)
vertex property holding the degree centrality value for each vertex in the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> dc = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, Integer> degree = analyst.degreeCentrality(graph, dc);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> degreeCentralityAsync(PgxGraph graph)
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Degree centrality counts the number of outgoing and incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.degreeCentralityAsync(graph);
promise.thenCompose(degree -> graph.queryPgqlAsync(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> degreeCentralityAsync(PgxGraph graph, java.lang.String propertyName)
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> degreeCentralityAsync(PgxGraph graph, VertexProperty<ID,java.lang.Integer> dc)
Degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Degree centrality counts the number of outgoing and incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
dc
- (out argument)
vertex property holding the degree centrality value for each vertex in the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> dc = graph.createVertexProperty(PropertyType.INTEGER);
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.degreeCentralityAsync(graph, dc);
promise.thenCompose(degree -> graph.queryPgqlAsync(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> diameter(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Diameter/radius gives an overview of the distances in a graph
The diameter of a graph is the maximal value of eccentricity of all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will compute the eccentricity of all the vertices and will also return the diameter or radius value depending on the request. The algorithm will return an INF eccentricity and diameter/radius, for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
Pair<Scalar<Integer>, VertexProperty<Integer, Integer>> diameter = analyst.diameter(graph);
diameter.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + diameter.getSecond().getName() + " MATCH (x) ORDER BY x." + diameter.getSecond().getName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
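For reference, the quantities described above can be written compactly (a sketch, not part of the Javadoc; d(v, u) denotes the shortest-path distance from v to u):

\mathrm{ecc}(v) = \max_{u \in V} d(v, u), \qquad \mathrm{diameter} = \max_{v \in V} \mathrm{ecc}(v), \qquad \mathrm{radius} = \min_{v \in V} \mathrm{ecc}(v)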
public <ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> diameter(PgxGraph graph, Scalar<java.lang.Integer> diameter, VertexProperty<ID,java.lang.Integer> eccentricity) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Diameter/radius gives an overview of the distances in a graph
The diameter of a graph is the maximal value of eccentricity of all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will compute the eccentricity of all the vertices and will also return the diameter or radius value depending on the request. The algorithm will return an INF eccentricity and diameter/radius, for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
diameter
- Scalar (integer) for holding the value of the diameter of the graph.
eccentricity
- (out argument) vertex property holding the eccentricity value for each vertex.
PgxGraph graph = ...;
Scalar<Integer> scalar = graph.createScalar(PropertyType.INTEGER);
VertexProperty<Integer, Integer> prop = graph.createVertexProperty(PropertyType.INTEGER);
Pair<Scalar<Integer>, VertexProperty<Integer, Integer>> diameter = analyst.diameter(graph, scalar, prop);
diameter.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + diameter.getSecond().getName() + " MATCH (x) ORDER BY x." + diameter.getSecond().getName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> diameterAsync(PgxGraph graph)
Diameter/radius gives an overview of the distances in a graph
The diameter of a graph is the maximal value of eccentricity of all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will compute the eccentricity of all the vertices and will also return the diameter or radius value depending on the request. The algorithm will return an INF eccentricity and diameter/radius, for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<Pair<Scalar<Integer>, VertexProperty<Integer, Integer>>> promise = analyst.diameterAsync(graph);
promise.thenCompose(diameter -> {
diameter.getFirst().get();
return graph.queryPgqlAsync(
"SELECT x, x." + diameter.getSecond().getName() + " MATCH (x) ORDER BY x." + diameter.getSecond().getName() +
" DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> diameterAsync(PgxGraph graph, Scalar<java.lang.Integer> diameter, VertexProperty<ID,java.lang.Integer> eccentricity)
Diameter/radius gives an overview of the distances in a graph
The diameter of a graph is the maximal value of eccentricity of all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will compute the eccentricity of all the vertices and will also return the diameter or radius value depending on the request. The algorithm will return an INF eccentricity and diameter/radius, for graphs with more than one strongly connected component.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
diameter
- Scalar (integer) for holding the value of the diameter of the graph.
eccentricity
- (out argument) vertex property holding the eccentricity value for each vertex.
PgxGraph graph = ...;
Scalar<Integer> scalar = graph.createScalar(PropertyType.INTEGER);
VertexProperty<Integer, Integer> prop = graph.createVertexProperty(PropertyType.INTEGER);
PgxFuture<Pair<Scalar<Integer>, VertexProperty<Integer, Integer>>> promise = analyst.diameterAsync(
graph, scalar, prop);
promise.thenCompose(diameter -> {
diameter.getFirst().get();
return graph.queryPgqlAsync(
"SELECT x, x." + diameter.getSecond().getName() + " MATCH (x) ORDER BY x." + diameter.getSecond().getName() +
" DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Double> eigenvectorCentrality(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Eigenvector centrality computes the centrality of the vertices in an intricate way using their neighbors, allowing it to find well-connected vertices
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality will be equivalent to do so with the normal or the transpose adjacency matrix, respectively leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations< code>=>
O(2 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Double> eigenvector = analyst.eigenvectorCentrality(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
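As a reminder of how power iteration works in general (a generic sketch of the method named above, not a description of the exact PGX implementation): starting from an initial vector x_0, each step multiplies by the adjacency matrix A (or its transpose, depending on the edge direction used) and normalizes,

$$x_{k+1} = \frac{A\,x_k}{\lVert A\,x_k \rVert}$$

until the scores change by less than the tolerance or the iteration limit is reached; the parameterized overloads below expose these knobs as maxDiff and max, and useL2Norm selects the norm used for the normalization.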
public <ID> VertexProperty<ID,java.lang.Double> eigenvectorCentrality(PgxGraph graph, int max, double maxDiff, boolean useL2Norm, boolean useInEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Eigenvector centrality derives the centrality of each vertex from the centrality of its neighbors, allowing well-connected vertices to be found
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality is equivalent to using the normal or the transpose adjacency matrix, respectively, leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
max - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
useL2Norm - boolean flag to determine whether the algorithm will use the l2 norm (Euclidean norm) or the l1 norm (absolute value) to normalize the centrality scores.
useInEdge - boolean flag to determine whether the algorithm will use the incoming or the outgoing edges in the graph for the computations.
PgxGraph graph = ...;
VertexProperty<Integer, Double> eigenvector = analyst.eigenvectorCentrality(graph, 100, 0.001, false, false);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> eigenvectorCentrality(PgxGraph graph, int max, double maxDiff, boolean useL2Norm, boolean useInEdge, VertexProperty<ID,java.lang.Double> ec) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Eigenvector centrality derives the centrality of each vertex from the centrality of its neighbors, allowing well-connected vertices to be found
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality is equivalent to using the normal or the transpose adjacency matrix, respectively, leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
max - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
useL2Norm - boolean flag to determine whether the algorithm will use the l2 norm (Euclidean norm) or the l1 norm (absolute value) to normalize the centrality scores.
useInEdge - boolean flag to determine whether the algorithm will use the incoming or the outgoing edges in the graph for the computations.
ec - (out argument) vertex property holding the normalized centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> ec = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> eigenvector = analyst.eigenvectorCentrality(graph, 100, 0.001, false, false, ec);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> eigenvectorCentrality(PgxGraph graph, VertexProperty<ID,java.lang.Double> ec) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Eigenvector centrality derives the centrality of each vertex from the centrality of its neighbors, allowing well-connected vertices to be found
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality is equivalent to using the normal or the transpose adjacency matrix, respectively, leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
ec - (out argument) vertex property holding the normalized centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> ec = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> eigenvector = analyst.eigenvectorCentrality(graph, ec);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> eigenvectorCentralityAsync(PgxGraph graph)
Eigenvector centrality derives the centrality of each vertex from the centrality of its neighbors, allowing well-connected vertices to be found
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality is equivalent to using the normal or the transpose adjacency matrix, respectively, leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.eigenvectorCentralityAsync(graph);
promise.thenCompose(eigenvector -> graph.queryPgqlAsync(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> eigenvectorCentralityAsync(PgxGraph graph, int max, double maxDiff, boolean useL2Norm, boolean useInEdge)
Eigenvector centrality derives the centrality of each vertex from the centrality of its neighbors, allowing well-connected vertices to be found
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality is equivalent to using the normal or the transpose adjacency matrix, respectively, leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
max - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
useL2Norm - boolean flag to determine whether the algorithm will use the l2 norm (Euclidean norm) or the l1 norm (absolute value) to normalize the centrality scores.
useInEdge - boolean flag to determine whether the algorithm will use the incoming or the outgoing edges in the graph for the computations.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.eigenvectorCentralityAsync(
graph, 100, 0.001, false, false);
promise.thenCompose(eigenvector -> graph.queryPgqlAsync(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> eigenvectorCentralityAsync(PgxGraph graph, int max, double maxDiff, boolean useL2Norm, boolean useInEdge, VertexProperty<ID,java.lang.Double> ec)
Eigenvector centrality derives the centrality of each vertex from the centrality of its neighbors, allowing well-connected vertices to be found
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality is equivalent to using the normal or the transpose adjacency matrix, respectively, leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
max - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
useL2Norm - boolean flag to determine whether the algorithm will use the l2 norm (Euclidean norm) or the l1 norm (absolute value) to normalize the centrality scores.
useInEdge - boolean flag to determine whether the algorithm will use the incoming or the outgoing edges in the graph for the computations.
ec - (out argument) vertex property holding the normalized centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> ec = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.eigenvectorCentralityAsync(
graph, 100, 0.001, false, false, ec);
promise.thenCompose(eigenvector -> graph.queryPgqlAsync(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> eigenvectorCentralityAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> ec)
Eigenvector centrality derives the centrality of each vertex from the centrality of its neighbors, allowing well-connected vertices to be found
The Eigenvector Centrality determines the centrality of a vertex by adding and weighting the centrality of its neighbors. Using outgoing or incoming edges when computing the eigenvector centrality is equivalent to using the normal or the transpose adjacency matrix, respectively, leading to the "right" and "left" eigenvectors.
The implementation of this algorithm uses the power iteration method.
O(V * k) with V = number of vertices, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
ec - (out argument) vertex property holding the normalized centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> ec = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.eigenvectorCentralityAsync(graph, ec);
promise.thenCompose(eigenvector -> graph.queryPgqlAsync(
"SELECT x, x." + eigenvector.getName() + " MATCH (x) ORDER BY x." + eigenvector.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> org.apache.commons.lang3.tuple.Triple<ScalarSequence<java.lang.Integer>,VertexSequence<ID>,EdgeSequence> enumerateSimplePaths(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int k, VertexSet verticesOnPath, EdgeSet edgesOnPath, PgxMap<PgxVertex<ID>,java.lang.Integer> dist) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Enumerate all simple paths between the source and destination vertex
Enumerate all simple paths between the source and destination vertex
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
k - the dimension of the distances property; i.e. number of high-degree vertices.
verticesOnPath - the vertices on the path.
edgesOnPath - the edges on the path.
dist - map containing the distance from the source vertex for each vertex on a path.
Triple containing a sequence containing the path lengths, a vertex-sequence containing the vertices on the paths and an edge-sequence containing the edges on the paths
java.util.concurrent.ExecutionException
java.lang.InterruptedException
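The documentation provides no example for this method; the following is a minimal sketch (not from the original Javadoc) that first runs allReachableVerticesEdges, listed earlier in this class, to obtain the vertex set, edge set and distance map, and then passes them to enumerateSimplePaths. The vertex IDs 128 and 333 and the hop limit 4 are placeholder values.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
int k = 4;
// Collect the vertices and edges lying on paths of length <= k, plus the distance map.
Triple<VertexSet<Integer>, EdgeSet, PgxMap<PgxVertex<Integer>, Integer>> reachable =
    analyst.allReachableVerticesEdges(graph, src, dst, k);
// Enumerate the simple paths within that subset; the first component holds the path lengths.
Triple<ScalarSequence<Integer>, VertexSequence<Integer>, EdgeSequence> paths =
    analyst.enumerateSimplePaths(graph, src, dst, k,
        reachable.getLeft(), reachable.getMiddle(), reachable.getRight());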
public <ID> PgxFuture<org.apache.commons.lang3.tuple.Triple<ScalarSequence<java.lang.Integer>,VertexSequence<ID>,EdgeSequence>> enumerateSimplePathsAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int k, VertexSet verticesOnPath, EdgeSet edgesOnPath, PgxMap<PgxVertex<ID>,java.lang.Integer> dist)
Enumerate all simple paths between the source and destination vertex
Enumerate all simple paths between the source and destination vertex
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
k - the dimension of the distances property; i.e. number of high-degree vertices.
verticesOnPath - the vertices on the path.
edgesOnPath - the edges on the path.
dist - map containing the distance from the source vertex for each vertex on a path.
Triple containing a sequence containing the path lengths, a vertex-sequence containing the vertices on the paths and an edge-sequence containing the edges on the paths
public <ID> AllPaths<ID> fattestPath(PgxGraph graph, ID rootId, EdgeProperty<java.lang.Double> capacity) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
fattestPath(PgxGraph, PgxVertex, EdgeProperty) taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> fattestPath(PgxGraph graph, ID rootId, EdgeProperty<java.lang.Double> capacity, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
fattestPath(PgxGraph, PgxVertex, EdgeProperty, VertexProperty, VertexProperty, VertexProperty) taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
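A minimal sketch of this ID-based convenience overload (not part of the original Javadoc); it assumes integer vertex IDs and an edge property named "cost":
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// Pass the raw vertex ID (128) instead of looking up the PgxVertex first.
AllPaths<Integer> fattestPath = analyst.fattestPath(graph, 128, cost);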
public <ID> AllPaths<ID> fattestPath(PgxGraph graph, PgxVertex<ID> root, EdgeProperty<java.lang.Double> capacity) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Fattest path is a fast algorithm for finding a shortest path with additional constraints for flow-related use cases
The Fattest path algorithm can be regarded as a variant of Dijkstra's algorithm; it tries to find the fattest path between the given source and all the reachable vertices in the graph. The fatness of a path is equal to the minimum value of the capacity of the edges that take part in the path, thus a fattest path is composed of the edges with the largest possible capacity.
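Written as a formula (restating the definition above), the quantity maximized for each reachable vertex is the fatness of the path P leading to it:

$$\mathrm{fatness}(P) = \min_{e \in P} \mathrm{capacity}(e)$$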
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
capacity - edge property holding the capacity of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> root = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
AllPaths<Integer> fattestPath = analyst.fattestPath(graph, root, cost);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> fattestPath(PgxGraph graph, PgxVertex<ID> root, EdgeProperty<java.lang.Double> capacity, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Fattest path is a fast algorithm for finding a shortest path with additional constraints for flow-related use cases
The Fattest path algorithm can be regarded as a variant of Dijkstra's algorithm; it tries to find the fattest path between the given source and all the reachable vertices in the graph. The fatness of a path is equal to the minimum value of the capacity of the edges that take part in the path, thus a fattest path is composed of the edges with the largest possible capacity.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
capacity - edge property holding the capacity of each edge in the graph.
distance - (out argument) vertex property holding the capacity value of the fattest path up to the current vertex. The fatness value for the source vertex will be INF, while it will be 0 for all the vertices that are not reachable from the source.
parent - (out argument) vertex property holding the parent vertex of each vertex in the fattest path.
parentEdge - (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> root = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
AllPaths<Integer> fattestPath = analyst.fattestPath(graph, root, cost, distance, parent, parentEdge);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<AllPaths<ID>> fattestPathAsync(PgxGraph graph, PgxVertex<ID> root, EdgeProperty<java.lang.Double> capacity)
Fattest path is a fast algorithm for finding a shortest path with additional constraints for flow-related use cases
The Fattest path algorithm can be regarded as a variant of Dijkstra's algorithm; it tries to find the fattest path between the given source and all the reachable vertices in the graph. The fatness of a path is equal to the minimum value of the capacity of the edges that take part in the path, thus a fattest path is composed of the edges with the largest possible capacity.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
capacity - edge property holding the capacity of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> root = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<AllPaths<Integer>> promise = analyst.fattestPathAsync(graph, root, cost);
promise.thenAccept(paths -> {
...;
});
public <ID> PgxFuture<AllPaths<ID>> fattestPathAsync(PgxGraph graph, PgxVertex<ID> root, EdgeProperty<java.lang.Double> capacity, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Fattest path is a fast algorithm for finding a shortest path with additional constraints for flow-related use cases
The Fattest path algorithm can be regarded as a variant of Dijkstra's algorithm; it tries to find the fattest path between the given source and all the reachable vertices in the graph. The fatness of a path is equal to the minimum value of the capacity of the edges that take part in the path, thus a fattest path is composed of the edges with the largest possible capacity.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
capacity - edge property holding the capacity of each edge in the graph.
distance - (out argument) vertex property holding the capacity value of the fattest path up to the current vertex. The fatness value for the source vertex will be INF, while it will be 0 for all the vertices that are not reachable from the source.
parent - (out argument) vertex property holding the parent vertex of each vertex in the fattest path.
parentEdge - (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> root = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<AllPaths<Integer>> promise = analyst.fattestPathAsync(graph, root, cost, distance, parent, parentEdge);
promise.thenAccept(paths -> {
...;
});
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
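A minimal sketch of this ID-based convenience overload (not part of the original Javadoc), assuming integer vertex IDs:
PgxGraph graph = ...;
// Equivalent to filteredBfs(graph, graph.getVertex(128)), but taking the raw vertex ID.
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
    analyst.filteredBfs(graph, 128);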
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex, int) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf, int maxDepth, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredBfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, int) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
maxDepth - maximum depth limit for the BFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex, 2);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex, navigator);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex, navigator, true);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth - maximum depth limit for the BFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex, navigator, true, 2);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth - maximum depth limit for the BFS traversal.
distance - (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent - (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex, navigator, true, 2, distance, parent);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
distance - (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent - (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex, navigator, true, distance, parent);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredBfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
maxDepth - maximum depth limit for the BFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> bfs =
analyst.filteredBfs(graph, vertex, navigator, 2);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root, int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
maxDepth - maximum depth limit for the BFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex, 2);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex, navigator);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex, navigator, true);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth - maximum depth limit for the BFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex, navigator, true, 2);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth - maximum depth limit for the BFS traversal.
distance - (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent - (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex, navigator, true, 2, distance, parent);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf - boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
distance - (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent - (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex, navigator, true, distance, parent);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredBfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, int maxDepth)
A Breadth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the BFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
navigator - navigator expression to be evaluated on the vertices during the graph traversal.
maxDepth - maximum depth limit for the BFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredBfsAsync(graph, vertex, navigator, 2);
promise.thenCompose(bfs -> graph.queryPgqlAsync(
"SELECT x, x." + bfs.getFirst().getName() + ", x." + bfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
bfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
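Likewise, a minimal sketch of this ID-based convenience overload (not part of the original Javadoc), assuming integer vertex IDs:
PgxGraph graph = ...;
// Equivalent to filteredDfs(graph, graph.getVertex(128)), but taking the raw vertex ID.
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
    analyst.filteredDfs(graph, 128);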
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex, int) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf, int maxDepth, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, int, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, boolean initWithInf, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, boolean, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, ID root, VertexFilter filter, VertexFilter navigator, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
filteredDfs(PgxGraph, PgxVertex, VertexFilter, VertexFilter, int) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It will return the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
root - the source vertex from the graph for the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
maxDepth
- maximum depth limit for the DFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex, 2);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex, navigator);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex, navigator, true);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth
- maximum depth limit for the DFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex, navigator, true, 2);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth
- maximum depth limit for the DFS traversal.
distance
- (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex, navigator, true, 2, distance, parent);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
distance
- (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex, navigator, true, distance, parent);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>> filteredDfs(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, int maxDepth) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
maxDepth
- maximum depth limit for the DFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfs(graph, vertex, navigator, 2);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
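The asynchronous variants return a PgxFuture; besides composing callbacks as above, the result can also be awaited in a blocking fashion. A sketch, assuming PgxFuture extends java.util.concurrent.CompletableFuture (so get() applies):
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
// get() blocks the calling thread until the traversal has finished
Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>> dfs =
analyst.filteredDfsAsync(graph, vertex).get();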
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root, int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
maxDepth
- maximum depth limit for the DFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex, 2);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex, navigator);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex, navigator, true);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth
- maximum depth limit for the DFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex, navigator, true, 2);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, int maxDepth, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
maxDepth
- maximum depth limit for the DFS traversal.
distance
- (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex, navigator, true, 2, distance, parent);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, boolean initWithInf, VertexProperty<ID,java.lang.Integer> distance, VertexProperty<ID,PgxVertex<ID>> parent)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
initWithInf
- boolean flag to set the initial distance values of the vertices. If set to true, it will initialize the distances as INF, and -1 otherwise.
distance
- (out argument) vertex property holding the hop distance for each reachable vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each reachable vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
VertexProperty<Integer, Integer> distance = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex, navigator, true, distance, parent);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Integer>,VertexProperty<ID,PgxVertex<ID>>>> filteredDfsAsync(PgxGraph graph, PgxVertex<ID> root, VertexFilter navigator, int maxDepth)
A Depth-First Search implementation with an option to filter edges during the traversal of the graph.
This filtered version of the DFS algorithm allows a filter and a navigator expression to be evaluated over the vertices during the traversal, discriminating them according to the desired criteria. It returns the distance to the source vertex and the corresponding parent vertex for all the filtered vertices.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
root
- the source vertex from the graph for the path.
navigator
- navigator expression to be evaluated on the vertices during the graph traversal.
maxDepth
- maximum depth limit for the DFS traversal.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexFilter navigator = VertexFilter.ALL;
PgxFuture<Pair<VertexProperty<Integer, Integer>, VertexProperty<Integer, PgxVertex<Integer>>>> promise =
analyst.filteredDfsAsync(graph, vertex, navigator, 2);
promise.thenCompose(dfs -> graph.queryPgqlAsync(
"SELECT x, x." + dfs.getFirst().getName() + ", x." + dfs.getSecond().getName() + " MATCH (x) ORDER BY x." +
dfs.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxPath<ID> findCycle(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Find cycle looks for any loop in the graph.
This algorithm tries to find a cycle in a directed graph using DFS traversals and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. The algorithm is expensive because it will perform DFS traversals using different vertices as starting points until it explores the whole graph (worst-case scenario), or until it finds a cycle.
The implementation of this algorithm uses a built-in DFS feature. It is an expensive algorithm to run on large graphs.
O(V * (V + E)) with V = number of vertices, E = number of edges
O(5 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
PgxGraph graph = ...;
PgxPath<Integer> cycle = analyst.findCycle(graph);
cycle.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
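The example above only retrieves the cycle's cost. The sketch below (not from the original documentation) shows how the returned path could be inspected, assuming PgxPath exposes exists() and getVertices():
PgxGraph graph = ...;
PgxPath<Integer> cycle = analyst.findCycle(graph);
if (cycle.exists()) {
  // vertices of the cycle in order of visit (assumed accessor)
  for (PgxVertex<Integer> v : cycle.getVertices()) {
    System.out.println(v.getId());
  }
}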
public <ID> PgxPath<ID> findCycle(PgxGraph graph, PgxVertex<ID> src) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Find cycle looks for any loop in the graph.
This implementation tries to find a cycle in a directed graph using the given vertex as starting point for the DFS traversal and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. Restricting the DFS traversal to a single starting point means that some parts of the graph may not get explored.
The implementation of this algorithm uses a built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(4 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
src
- source vertex for the search.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxPath<Integer> cycle = analyst.findCycle(graph, vertex);
cycle.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxPath<ID> findCycle(PgxGraph graph, PgxVertex<ID> src, VertexSequence<ID> nodeSeq, EdgeSequence edgeSeq) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Find cycle looks for any loop in the graph.
This implementation tries to find a cycle in a directed graph using the given vertex as starting point for the DFS traversal and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. Restricting the DFS traversal to a single starting point means that some parts of the graph may not get explored.
The implementation of this algorithm uses a built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(4 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
src
- source vertex for the search.
nodeSeq
- (out argument) vertex sequence holding the vertices in the cycle.
edgeSeq
- (out argument) edge sequence holding the edges in the cycle.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexSequence<Integer> nodeSeq = graph.createVertexSequence();
EdgeSequence edgeSeq = graph.createEdgeSequence();
PgxPath<Integer> cycle = analyst.findCycle(graph, vertex, nodeSeq, edgeSeq);
cycle.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxPath<ID> findCycle(PgxGraph graph, VertexSequence<ID> nodeSeq, EdgeSequence edgeSeq) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Find cycle looks for any loop in the graph.
This algorithm tries to find a cycle in a directed graph using DFS traversals and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. The algorithm is expensive because it will perform DFS traversals using different vertices as starting points until it explores the whole graph (worst-case scenario), or until it finds a cycle.
The implementation of this algorithm uses a built-in DFS feature. It is an expensive algorithm to run on large graphs.
O(V * (V + E)) with V = number of vertices, E = number of edges
O(5 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
nodeSeq
- (out argument) vertex sequence holding the vertices in the cycle.
edgeSeq
- (out argument) edge sequence holding the edges in the cycle.
PgxGraph graph = ...;
VertexSequence<Integer> nodeSeq = graph.createVertexSequence();
EdgeSequence edgeSeq = graph.createEdgeSequence();
PgxPath<Integer> cycle = analyst.findCycle(graph, nodeSeq, edgeSeq);
cycle.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<PgxPath<ID>> findCycleAsync(PgxGraph graph)
Find cycle looks for any loop in the graph.
This algorithm tries to find a cycle in a directed graph using DFS traversals and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. The algorithm is expensive because it will perform DFS traversals using different vertices as starting points until it explores the whole graph (worst-case scenario), or until it finds a cycle.
The implementation of this algorithm uses a built-in DFS feature. It is an expensive algorithm to run on large graphs.
O(V * (V + E)) with V = number of vertices, E = number of edges
O(5 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<PgxPath<Integer>> promise = analyst.findCycleAsync(graph);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxFuture<PgxPath<ID>> findCycleAsync(PgxGraph graph, PgxVertex<ID> src)
Find cycle looks for any loop in the graph.
This implementation tries to find a cycle in a directed graph using the given vertex as starting point for the DFS traversal and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. Restricting the DFS traversal to a single starting point means that some parts of the graph may not get explored.
The implementation of this algorithm uses a built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(4 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
src
- source vertex for the search.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<PgxPath<Integer>> promise = analyst.findCycleAsync(graph, vertex);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxFuture<PgxPath<ID>> findCycleAsync(PgxGraph graph, PgxVertex<ID> src, VertexSequence<ID> nodeSeq, EdgeSequence edgeSeq)
Find cycle looks for any loop in the graph.
This implementation tries to find a cycle in a directed graph using the given vertex as starting point for the DFS traversal and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. Restricting the DFS traversal to a single starting point means that some parts of the graph may not get explored.
The implementation of this algorithm uses a built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(4 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
src
- source vertex for the search.
nodeSeq
- (out argument) vertex sequence holding the vertices in the cycle.
edgeSeq
- (out argument) edge sequence holding the edges in the cycle.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexSequence<Integer> nodeSeq = graph.createVertexSequence();
EdgeSequence edgeSeq = graph.createEdgeSequence();
PgxFuture<PgxPath<Integer>> promise = analyst.findCycleAsync(graph, vertex, nodeSeq, edgeSeq);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxFuture<PgxPath<ID>> findCycleAsync(PgxGraph graph, VertexSequence<ID> nodeSeq, EdgeSequence edgeSeq)
Find cycle looks for any loop in the graph.
This algorithm tries to find a cycle in a directed graph using DFS traversals and will return the first cycle found, if there is one. In such case, the vertices and edges involved in the cycle will be returned in the order of visit. The algorithm is expensive because it will perform DFS traversals using different vertices as starting points until it explores the whole graph (worst-case scenario), or until it finds a cycle.
The implementation of this algorithm uses a built-in DFS feature. It is an expensive algorithm to run on large graphs.
O(V * (V + E)) with V = number of vertices, E = number of edges
O(5 * V + E) with V = number of vertices, E = number of edges
graph
- the graph.
nodeSeq
- (out argument) vertex sequence holding the vertices in the cycle.
edgeSeq
- (out argument) edge sequence holding the edges in the cycle.
PgxGraph graph = ...;
VertexSequence<Integer> nodeSeq = graph.createVertexSequence();
EdgeSequence edgeSeq = graph.createEdgeSequence();
PgxFuture<PgxPath<Integer>> promise = analyst.findCycleAsync(graph, nodeSeq, edgeSeq);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public PgxSession getSession()
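getSession() carries no description in this extract; it simply returns the PgxSession this analyst is bound to, for example:
PgxSession session = analyst.getSession();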
public oracle.pgx.api.beta.mllib.GraphWiseConvLayerConfigBuilder graphWiseConvLayerConfigBuilder()
public oracle.pgx.api.beta.mllib.GraphWisePredictionLayerConfigBuilder graphWisePredictionLayerConfigBuilder()
public <ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> hits(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>> hits = analyst.hits(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + hits.getFirst().getName() + ", x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
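To make the update rule in the description concrete, the following standalone sketch (plain Java, not the PGX implementation) performs one HITS iteration over a tiny directed graph given as adjacency lists; score normalization is omitted for brevity:
import java.util.*;

public class HitsIterationSketch {
  public static void main(String[] args) {
    // edges point from the key to each vertex in its list
    Map<Integer, List<Integer>> out = Map.of(1, List.of(2, 3), 2, List.of(3), 3, List.of(1));
    Map<Integer, Double> hub = new HashMap<>();
    out.keySet().forEach(v -> hub.put(v, 1.0));

    // authority(v) = sum of the hub scores of the vertices pointing to v
    Map<Integer, Double> auth = new HashMap<>();
    out.keySet().forEach(v -> auth.put(v, 0.0));
    out.forEach((src, dsts) -> dsts.forEach(dst -> auth.merge(dst, hub.get(src), Double::sum)));

    // hub(v) = sum of the authority scores of the vertices v points to
    Map<Integer, Double> newHub = new HashMap<>();
    out.forEach((src, dsts) -> newHub.put(src, dsts.stream().mapToDouble(auth::get).sum()));

    System.out.println("authority = " + auth + ", hub = " + newHub);
  }
}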
public <ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> hits(PgxGraph graph, int max) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
max
- number of iterations that will be performed.
PgxGraph graph = ...;
Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>> hits = analyst.hits(graph, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + hits.getFirst().getName() + ", x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> hits(PgxGraph graph, int max, VertexProperty<ID,java.lang.Double> auth, VertexProperty<ID,java.lang.Double> hubs) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
max
- number of iterations that will be performed.
auth
- (out argument) vertex property holding the authority score for each vertex.
hubs
- (out argument) vertex property holding the hub score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> auth = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> hubs = graph.createVertexProperty(PropertyType.DOUBLE);
Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>> hits = analyst.hits(graph, 100, auth, hubs);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + hits.getFirst().getName() + ", x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>> hits(PgxGraph graph, VertexProperty<ID,java.lang.Double> auth, VertexProperty<ID,java.lang.Double> hubs) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
auth
- (out argument) vertex property holding the authority score for each vertex.
hubs
- (out argument) vertex property holding the hub score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> auth = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> hubs = graph.createVertexProperty(PropertyType.DOUBLE);
Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>> hits = analyst.hits(graph, auth, hubs);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + hits.getFirst().getName() + ", x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> hitsAsync(PgxGraph graph)
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>>> promise = analyst.hitsAsync(
graph);
promise.thenCompose(hits -> graph.queryPgqlAsync(
"SELECT x, x." + hits.getFirst().getName() + " x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> hitsAsync(PgxGraph graph, int max)
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
max
- number of iterations that will be performed.
PgxGraph graph = ...;
PgxFuture<Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>>> promise = analyst.hitsAsync(
graph, 100);
promise.thenCompose(hits -> graph.queryPgqlAsync(
"SELECT x, x." + hits.getFirst().getName() + " x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> hitsAsync(PgxGraph graph, int max, VertexProperty<ID,java.lang.Double> auth, VertexProperty<ID,java.lang.Double> hubs)
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
max
- number of iterations that will be performed.
auth
- (out argument) vertex property holding the authority score for each vertex.
hubs
- (out argument) vertex property holding the hub score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> auth = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> hubs = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>>> promise = analyst.hitsAsync(
graph, 100, auth, hubs);
promise.thenCompose(hits -> graph.queryPgqlAsync(
"SELECT x, x." + hits.getFirst().getName() + ", x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<VertexProperty<ID,java.lang.Double>,VertexProperty<ID,java.lang.Double>>> hitsAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> auth, VertexProperty<ID,java.lang.Double> hubs)
HITS assigns ranking scores to the vertices, aimed to assess the quality of information and references in linked structures
HITS is an algorithm that computes two ranking scores (authority and hub) for each vertex in the graph. The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative on a specific topic, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub will point to many authorities, while a good authority will be pointed to by many hubs. The authority score of a vertex V is computed by adding all the hub scores of its incoming neighbors (i.e. vertices with edges pointing to V). The hub score is computed in a similar way, using the authority scores instead.
The implementation of this algorithm uses an iterative method. Both the authority and hub scores of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k = maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
auth
- (out argument) vertex property holding the authority score for each vertex.
hubs
- (out argument) vertex property holding the hub score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> auth = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> hubs = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<Pair<VertexProperty<Integer, Double>, VertexProperty<Integer, Double>>> promise = analyst.hitsAsync(
graph, auth, hubs);
promise.thenCompose(hits -> graph.queryPgqlAsync(
"SELECT x, x." + hits.getFirst().getName() + ", x." + hits.getSecond().getName() + " MATCH (x) ORDER BY x." +
hits.getFirst().getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Integer> inDegreeCentrality(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
In-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
In-Degree centrality returns the sum of the number of incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> degree = analyst.inDegreeCentrality(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Integer> inDegreeCentrality(PgxGraph graph, VertexProperty<ID,java.lang.Integer> dc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
In-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
In-Degree centrality returns the sum of the number of incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
dc
- (out argument) vertex property holding the degree centrality value for each vertex in the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> dc = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, Integer> degree = analyst.inDegreeCentrality(graph, dc);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> inDegreeCentralityAsync(PgxGraph graph)
In-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
In-Degree centrality returns the sum of the number of incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.inDegreeCentralityAsync(graph);
promise.thenCompose(degree -> graph.queryPgqlAsync(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> inDegreeCentralityAsync(PgxGraph graph, java.lang.String propertyName)
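This overload is undocumented in this extract; presumably propertyName names the vertex property that will hold the result, but that is an assumption rather than something stated here. A hypothetical sketch:
PgxGraph graph = ...;
// "in_degree" is a hypothetical output property name, not taken from the original text
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.inDegreeCentralityAsync(graph, "in_degree");
promise.thenAccept(degree -> System.out.println(degree.getName()));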
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> inDegreeCentralityAsync(PgxGraph graph, VertexProperty<ID,java.lang.Integer> dc)
In-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
In-Degree centrality returns the sum of the number of incoming edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
dc
- (out argument) vertex property holding the degree centrality value for each vertex in the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> dc = graph.createVertexProperty(PropertyType.INTEGER);
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.inDegreeCentralityAsync(graph, dc);
promise.thenCompose(degree -> graph.queryPgqlAsync(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public PgxMap<java.lang.Integer,java.lang.Long> inDegreeDistribution(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
In-degree distribution gives information about the incoming flows in a graph
This version of the degree distribution will return a map with the distribution of the in-degree (i.e. just incoming edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxMap<Integer, Long> degree = analyst.inDegreeDistribution(graph);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
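The returned map acts as a histogram: each key is an in-degree value and each value is the number of vertices with that in-degree. A sketch of iterating it, assuming PgxMap exposes entries():
PgxGraph graph = ...;
PgxMap<Integer, Long> degree = analyst.inDegreeDistribution(graph);
// entries() is assumed here; each entry maps an in-degree to a vertex count
for (Map.Entry<Integer, Long> entry : degree.entries()) {
  System.out.println(entry.getKey() + " -> " + entry.getValue());
}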
public PgxMap<java.lang.Integer,java.lang.Long> inDegreeDistribution(PgxGraph graph, PgxMap<java.lang.Integer,java.lang.Long> distribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
In-degree distribution gives information about the incoming flows in a graph
This version of the degree distribution will return a map with the distribution of the in-degree (i.e. just incoming edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
distribution
- (out argument)
PgxGraph graph = ...;
PgxMap<Integer, Long> distribution = graph.createMap(PropertyType.INTEGER, PropertyType.LONG);
PgxMap<Integer, Long> degree = analyst.inDegreeDistribution(graph, distribution);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> inDegreeDistributionAsync(PgxGraph graph)
In-degree distribution gives information about the incoming flows in a graph
This version of the degree distribution will return a map with the distribution of the in-degree (i.e. just incoming edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<PgxMap<Integer, Long>> promise = analyst.inDegreeDistributionAsync(graph);
promise.thenAccept(map -> {
...;
});
public PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> inDegreeDistributionAsync(PgxGraph graph, PgxMap<java.lang.Integer,java.lang.Long> distribution)
In-degree distribution gives information about the incoming flows in a graph
This version of the degree distribution will return a map with the distribution of the in-degree (i.e. just incoming edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph - the graph.
distribution - (out argument)
PgxGraph graph = ...;
PgxMap<Integer, Long> distribution = graph.createMap(PropertyType.INTEGER, PropertyType.LONG);
PgxFuture<PgxMap<Integer, Long>> promise = analyst.inDegreeDistributionAsync(graph, distribution);
promise.thenAccept(map -> {
...;
});
public <ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> kcore(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
Pair<Scalar<Long>, VertexProperty<Integer, Long>> kcore = analyst.kcore(graph);
kcore.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> kcore(PgxGraph graph, int minCore, int maxCore) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph
- the graph.minCore
- minimum k-core value.maxCore
- maximum k-core value.
PgxGraph graph = ...;
Pair<Scalar<Long>, VertexProperty<Integer, Long>> kcore = analyst.kcore(graph, 0, 2147483647);
kcore.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> kcore(PgxGraph graph, int minCore, int maxCore, Scalar<java.lang.Long> maxKCore, VertexProperty<ID,java.lang.Long> kcore) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph
- the graph.minCore
- minimum k-core value.maxCore
- maximum k-core value.maxKCore
- Scalar (long) for holding the value of the maximum k-core found by the algorithm.kcore
- (out argument) vertex property with the largest k-core value for each vertex.
PgxGraph graph = ...;
Scalar<Long> scalar = graph.createScalar(PropertyType.LONG);
VertexProperty<Integer, Long> prop = graph.createVertexProperty(PropertyType.LONG);
Pair<Scalar<Long>, VertexProperty<Integer, Long>> kcore = analyst.kcore(graph, 0, 2147483647, scalar, prop);
kcore.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>> kcore(PgxGraph graph, Scalar<java.lang.Long> maxKCore, VertexProperty<ID,java.lang.Long> kcore) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
maxKCore - Scalar (long) for holding the value of the maximum k-core found by the algorithm.
kcore - (out argument) vertex property with the largest k-core value for each vertex.
PgxGraph graph = ...;
Scalar<Long> scalar = graph.createScalar(PropertyType.LONG);
VertexProperty<Integer, Long> prop = graph.createVertexProperty(PropertyType.LONG);
Pair<Scalar<Long>, VertexProperty<Integer, Long>> kcore = analyst.kcore(graph, scalar, prop);
kcore.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> kcoreAsync(PgxGraph graph)
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<Pair<Scalar<Long>, VertexProperty<Integer, Long>>> promise = analyst.kcoreAsync(graph);
promise.thenCompose(kcore -> {
    kcore.getFirst().get();
    return graph.queryPgqlAsync(
        "SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() +
        " DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> kcoreAsync(PgxGraph graph, int minCore, int maxCore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
minCore - minimum k-core value.
maxCore - maximum k-core value.
PgxGraph graph = ...;
PgxFuture<Pair<Scalar<Long>, VertexProperty<Integer, Long>>> promise = analyst.kcoreAsync(graph, 0, 2147483647);
promise.thenCompose(kcore -> {
    kcore.getFirst().get();
    return graph.queryPgqlAsync(
        "SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() +
        " DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> kcoreAsync(PgxGraph graph, int minCore, int maxCore, Scalar<java.lang.Long> maxKCore, VertexProperty<ID,java.lang.Long> kcore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
minCore - minimum k-core value.
maxCore - maximum k-core value.
maxKCore - Scalar (long) for holding the value of the maximum k-core found by the algorithm.
kcore - (out argument) vertex property with the largest k-core value for each vertex.
PgxGraph graph = ...;
Scalar<Long> scalar = graph.createScalar(PropertyType.LONG);
VertexProperty<Integer, Long> prop = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Pair<Scalar<Long>, VertexProperty<Integer, Long>>> promise = analyst.kcoreAsync(
    graph, 0, 2147483647, scalar, prop);
promise.thenCompose(kcore -> {
    kcore.getFirst().get();
    return graph.queryPgqlAsync(
        "SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() +
        " DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<Scalar<java.lang.Long>,VertexProperty<ID,java.lang.Long>>> kcoreAsync(PgxGraph graph, Scalar<java.lang.Long> maxKCore, VertexProperty<ID,java.lang.Long> kcore)
k-core decomposes a graph into layers revealing subgraphs with particular properties
A k-core is a maximal subgraph in which all of its vertices are connected and have a degree of at least k. The k-cores can be regarded as layers in a graph, since a (k+1)-core will always be a subgraph of a k-core. This means that the larger k becomes, the smaller its k-core (i.e. its corresponding subgraph) will be. The k-core value (or coreness) assigned to a vertex corresponds to the core with the greatest degree among all the cores it belongs to. This implementation of k-core looks for cores lying within the interval set by the minCore and maxCore input variables.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
maxKCore - Scalar (long) for holding the value of the maximum k-core found by the algorithm.
kcore - (out argument) vertex property with the largest k-core value for each vertex.
PgxGraph graph = ...;
Scalar<Long> scalar = graph.createScalar(PropertyType.LONG);
VertexProperty<Integer, Long> prop = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Pair<Scalar<Long>, VertexProperty<Integer, Long>>> promise = analyst.kcoreAsync(graph, scalar, prop);
promise.thenCompose(kcore -> {
    kcore.getFirst().get();
    return graph.queryPgqlAsync(
        "SELECT x, x." + kcore.getSecond().getName() + " MATCH (x) ORDER BY x." + kcore.getSecond().getName() +
        " DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> Pair<VertexSequence<ID>,EdgeSequence> limitedShortestPathHopDist(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
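As an illustration only, a minimal sketch of invoking this variant; the vertex endpoints and the precomputed high-degree mapping, vertex set and distance index are placeholders (initialized with "..." as in the other examples), and the maxHops value of 4 is an arbitrary choice:
PgxGraph graph = ...;
PgxVertex<Integer> src = ...;
PgxVertex<Integer> dst = ...;
PgxMap<Integer, PgxVertex<Integer>> highDegreeVertexMapping = ...;
VertexSet<Integer> highDegreeVertices = ...;
VertexProperty<Integer, PgxVect<Integer>> index = ...;
Pair<VertexSequence<Integer>, EdgeSequence> path = analyst.limitedShortestPathHopDist(
    graph, src, dst, 4, highDegreeVertexMapping, highDegreeVertices, index);
VertexSequence<Integer> pathVertices = path.getFirst();
EdgeSequence pathEdges = path.getSecond();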
public <ID> Pair<VertexSequence<ID>,EdgeSequence> limitedShortestPathHopDist(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index, VertexSequence<ID> pathVertices, EdgeSequence pathEdges) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
pathVertices - (out argument) will contain the vertices on the found path or will be empty if there is none.
pathEdges - (out argument) will contain the edges on the found path or will be empty if there is none.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
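A sketch of the out-argument variant under the same assumptions as above; it additionally assumes that PgxGraph exposes createVertexSequence() and createEdgeSequence() factory methods for building the output collections (verify against the PgxGraph API):
PgxGraph graph = ...;
PgxVertex<Integer> src = ...;
PgxVertex<Integer> dst = ...;
PgxMap<Integer, PgxVertex<Integer>> highDegreeVertexMapping = ...;
VertexSet<Integer> highDegreeVertices = ...;
VertexProperty<Integer, PgxVect<Integer>> index = ...;
// Assumed factory methods for the output collections.
VertexSequence<Integer> pathVertices = graph.createVertexSequence();
EdgeSequence pathEdges = graph.createEdgeSequence();
Pair<VertexSequence<Integer>, EdgeSequence> path = analyst.limitedShortestPathHopDist(
    graph, src, dst, 4, highDegreeVertexMapping, highDegreeVertices, index, pathVertices, pathEdges);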
public <ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> limitedShortestPathHopDistAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index)
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
public <ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> limitedShortestPathHopDistAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index, VertexSequence<ID> pathVertices, EdgeSequence pathEdges)
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
pathVertices - (out argument) will contain the vertices on the found path or will be empty if there is none.
pathEdges - (out argument) will contain the edges on the found path or will be empty if there is none.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
public <ID> Pair<VertexSequence<ID>,EdgeSequence> limitedShortestPathHopDistFiltered(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index, EdgeFilter filter) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
filter - filter to be evaluated on the edges when searching for a path.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
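A sketch of the filtered variant; the EdgeFilter.fromExpression(...) call and the "cost" edge property are assumptions used here only for illustration:
PgxGraph graph = ...;
PgxVertex<Integer> src = ...;
PgxVertex<Integer> dst = ...;
PgxMap<Integer, PgxVertex<Integer>> highDegreeVertexMapping = ...;
VertexSet<Integer> highDegreeVertices = ...;
VertexProperty<Integer, PgxVect<Integer>> index = ...;
// Assumed filter construction; restricts the search to edges matching the expression.
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost < 10");
Pair<VertexSequence<Integer>, EdgeSequence> path = analyst.limitedShortestPathHopDistFiltered(
    graph, src, dst, 4, highDegreeVertexMapping, highDegreeVertices, index, filter);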
public <ID> Pair<VertexSequence<ID>,EdgeSequence> limitedShortestPathHopDistFiltered(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index, EdgeFilter filter, VertexSequence<ID> pathVertices, EdgeSequence pathEdges) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
filter - filter to be evaluated on the edges when searching for a path.
pathVertices - (out argument) will contain the vertices on the found path or will be empty if there is none.
pathEdges - (out argument) will contain the edges on the found path or will be empty if there is none.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> limitedShortestPathHopDistFilteredAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index, EdgeFilter filter)
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
filter - filter to be evaluated on the edges when searching for a path.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
public <ID> PgxFuture<Pair<VertexSequence<ID>,EdgeSequence>> limitedShortestPathHopDistFilteredAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, int maxHops, PgxMap<java.lang.Integer,PgxVertex<ID>> highDegreeVertexMapping, VertexSet<ID> highDegreeVertices, VertexProperty<ID,PgxVect<java.lang.Integer>> index, EdgeFilter filter, VertexSequence<ID> pathVertices, EdgeSequence pathEdges)
Computes the k-hop limited shortest path between two vertices.
Computes the shortest path between the source and destination vertex. The algorithm only considers paths up to a length of k.
O(E) with E = number of edges
O(V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
maxHops - the maximum number of edges to follow when trying to find a path.
highDegreeVertexMapping - the mapping of the high-degree vertices.
highDegreeVertices - the high-degree vertices.
index - index containing distances to high-degree vertices.
filter - filter to be evaluated on the edges when searching for a path.
pathVertices - (out argument) will contain the vertices on the found path or will be empty if there is none.
pathEdges - (out argument) will contain the edges on the found path or will be empty if there is none.
Returns a pair containing the vertices on the path from src to dst and the edges on the path. Both will be empty if there is no path within maxHops steps.
public oracle.pgx.api.beta.mllib.DeepWalkModel loadDeepWalkModel(java.lang.String path, java.lang.String key) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Loads a DeepWalk model from the given path.
path - the path
key - the decryption key, or null if no encryption was used
java.lang.InterruptedException
java.util.concurrent.ExecutionException
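A minimal sketch of loading a stored model; the file path is hypothetical and null is passed as the key because the model is assumed to be unencrypted. loadPg2vecModel and loadSupervisedGraphWiseModel below follow the same pattern:
DeepWalkModel model = analyst.loadDeepWalkModel("/tmp/deepwalk_model", null);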
public oracle.pgx.api.beta.mllib.Pg2vecModel loadPg2vecModel(java.lang.String path, java.lang.String key) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Loads a Pg2vec model from the given path.
path - the path
key - the decryption key, or null if no encryption was used
java.lang.InterruptedException
java.util.concurrent.ExecutionException
public oracle.pgx.api.beta.mllib.SupervisedGraphWiseModel loadSupervisedGraphWiseModel(java.lang.String path, java.lang.String key) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Loads a supervised GraphWise model from the given path.
path - the path
key - the decryption key, or null if no encryption was used
java.lang.InterruptedException
java.util.concurrent.ExecutionException
public <ID> VertexProperty<ID,java.lang.Double> localClusteringCoefficient(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
LCC gives information about potential clustering options in a graph
The LCC of a vertex V is the fraction of connections between each pair of neighbors of V, i.e. the fraction of existing triangles from all the possible triangles involving V and every other pair of neighbor vertices of V. This implementation is intended for undirected graphs. Vertices with a degree smaller than 2 will be assigned an LCC value of 0.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V ^ 2) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Double> lcc = analyst.localClusteringCoefficient(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + lcc.getName() + " MATCH (x) ORDER BY x." + lcc.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> localClusteringCoefficient(PgxGraph graph, VertexProperty<ID,java.lang.Double> lcc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
LCC gives information about potential clustering options in a graph
The LCC of a vertex V is the fraction of connections between each pair of neighbors of V, i.e. the fraction of existing triangles from all the possible triangles involving V and every other pair of neighbor vertices of V. This implementation is intended for undirected graphs. Vertices with a degree smaller than 2 will be assigned an LCC value of 0.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V ^ 2) with V = number of vertices
O(V) with V = number of vertices
graph - the graph.
lcc - (out argument)
PgxGraph graph = ...;
VertexProperty<Integer, Double> prop = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> lcc = analyst.localClusteringCoefficient(graph, prop);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + lcc.getName() + " MATCH (x) ORDER BY x." + lcc.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> localClusteringCoefficientAsync(PgxGraph graph)
LCC gives information about potential clustering options in a graph
The LCC of a vertex V is the fraction of connections between each pair of neighbors of V, i.e. the fraction of existing triangles from all the possible triangles involving V and every other pair of neighbor vertices of V. This implementation is intended for undirected graphs. Vertices with a degree smaller than 2 will be assigned an LCC value of 0.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V ^ 2) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.localClusteringCoefficientAsync(graph);
promise.thenCompose(lcc -> graph.queryPgqlAsync(
"SELECT x, x." + lcc.getName() + " MATCH (x) ORDER BY x." + lcc.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> localClusteringCoefficientAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> lcc)
LCC gives information about potential clustering options in a graph
The LCC of a vertex V is the fraction of connections between each pair of neighbors of V, i.e. the fraction of existing triangles from all the possible triangles involving V and every other pair of neighbor vertices of V. This implementation is intended for undirected graphs. Vertices with a degree smaller than 2 will be assigned an LCC value of 0.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V ^ 2) with V = number of vertices
O(V) with V = number of vertices
graph - the graph.
lcc - (out argument)
PgxGraph graph = ...;
VertexProperty<Integer, Double> prop = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.localClusteringCoefficientAsync(graph, prop);
promise.thenCompose(lcc -> graph.queryPgqlAsync(
"SELECT x, x." + lcc.getName() + " MATCH (x) ORDER BY x." + lcc.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Long> louvain(PgxGraph graph, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Louvain can detect communities in a large graph relatively fast.
Louvain is an algorithm for community detection in large graphs which uses the graph's modularity. Initially it assigns a different community to each node of the graph. It then iterates over the nodes and evaluates for each node the modularity gain obtained by removing the node from its community and placing it in the community of one of its neighbours. The node is placed in the community for which the modularity gain is maximum. This process is repeated for all nodes until no improvement is possible, i.e. until no new assignment of a node to a different community can improve the graph's modularity.
The implementation of this algorithm uses an iterative method.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(8 * V + E) with V = number of vertices
graph - the graph.
weight - weights of the edges of the graph.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Long> community = analyst.louvain(graph, weight);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Long> louvain(PgxGraph graph, EdgeProperty<java.lang.Double> weight, int maxIter) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Louvain can detect communities in a large graph relatively fast.
Louvain is an algorithm for community detection in large graphs which uses the graph's modularity. Initially it assigns a different community to each node of the graph. It then iterates over the nodes and evaluates for each node the modularity gain obtained by removing the node from its community and placing it in the community of one of its neighbours. The node is placed in the community for which the modularity gain is maximum. This process is repeated for all nodes until no improvement is possible, i.e. until no new assignment of a node to a different community can improve the graph's modularity.
The implementation of this algorithm uses an iterative method.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(8 * V + E) with V = number of vertices
graph - the graph.
weight - weights of the edges of the graph.
maxIter - maximum number of iterations that will be performed during each pass.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Long> community = analyst.louvain(graph, weight, 100);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Long> louvain(PgxGraph graph, EdgeProperty<java.lang.Double> weight, int maxIter, int nbrPass, double tol, VertexProperty<ID,java.lang.Long> community) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Louvain can detect communities in a large graph relatively fast.
Louvain is an algorithm for community detection in large graphs which uses the graph's modularity. Initially it assigns a different community to each node of the graph. It then iterates over the nodes and evaluates for each node the modularity gain obtained by removing the node from its community and placing it in the community of one of its neighbours. The node is placed in the community for which the modularity gain is maximum. This process is repeated for all nodes until no improvement is possible, i.e. until no new assignment of a node to a different community can improve the graph's modularity.
The implementation of this algorithm uses an iterative method.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(8 * V + E) with V = number of vertices
graph - the graph.
weight - weights of the edges of the graph.
maxIter - maximum number of iterations that will be performed during each pass.
nbrPass - number of passes that will be performed.
tol - maximum tolerated error value. The algorithm will stop once the graph's total modularity gain becomes smaller than this value.
community - the community ID assigned to each node.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Long> communityProp = graph.createVertexProperty(PropertyType.LONG);
VertexProperty<Integer, Long> community = analyst.louvain(graph, weight, 100, 1, 0.0001, communityProp);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Long>> louvainAsync(PgxGraph graph, EdgeProperty<java.lang.Double> weight)
Louvain can detect communities in a large graph relatively fast.
Louvain is an algorithm for community detection in large graphs which uses the graph's modularity. Initially it assigns a different community to each node of the graph. It then iterates over the nodes and evaluates for each node the modularity gain obtained by removing the node from its community and placing it in the community of one of its neighbours. The node is placed in the community for which the modularity gain is maximum. This process is repeated for all nodes until no improvement is possible, i.e. until no new assignment of a node to a different community can improve the graph's modularity.
The implementation of this algorithm uses an iterative method.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(8 * V + E) with V = number of vertices
graph - the graph.
weight - weights of the edges of the graph.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Long>> promise = analyst.louvainAsync(graph, weight);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Long>> louvainAsync(PgxGraph graph, EdgeProperty<java.lang.Double> weight, int maxIter)
Louvain can detect communities in a large graph relatively fast.
Louvain is an algorithm for community detection in large graphs which uses the graph's modularity. Initially it assigns a different community to each node of the graph. It then iterates over the nodes and evaluates for each node the modularity gain obtained by removing the node from its community and placing it in the community of one of its neighbours. The node is placed in the community for which the modularity gain is maximum. This process is repeated for all nodes until no improvement is possible, i.e. until no new assignment of a node to a different community can improve the graph's modularity.
The implementation of this algorithm uses an iterative method.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(8 * V + E) with V = number of vertices
graph - the graph.
weight - weights of the edges of the graph.
maxIter - maximum number of iterations that will be performed during each pass.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Long>> promise = analyst.louvainAsync(graph, weight, 100);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Long>> louvainAsync(PgxGraph graph, EdgeProperty<java.lang.Double> weight, int maxIter, int nbrPass, double tol, VertexProperty<ID,java.lang.Long> community)
Louvain can detect communities in a large graph relatively fast.
Louvain is an algorithm for community detection in large graphs which uses the graph's modularity. Initially it assigns a different community to each node of the graph. It then iterates over the nodes and evaluates for each node the modularity gain obtained by removing the node from its community and placing it in the community of one of its neighbours. The node is placed in the community for which the modularity gain is maximum. This process is repeated for all nodes until no improvement is possible, i.e. until no new assignment of a node to a different community can improve the graph's modularity.
The implementation of this algorithm uses an iterative method.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(8 * V + E) with V = number of vertices
graph - the graph.
weight - weights of the edges of the graph.
maxIter - maximum number of iterations that will be performed during each pass.
nbrPass - number of passes that will be performed.
tol - maximum tolerated error value. The algorithm will stop once the graph's total modularity gain becomes smaller than this value.
community - the community ID assigned to each node.
PgxGraph graph = ...;
EdgeProperty<Double> weight = graph.getEdgeProperty("cost");
VertexProperty<Integer, Long> communityProp = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<VertexProperty<Integer, Long>> promise = analyst.louvainAsync(graph, weight, 100, 1, 0.0001, communityProp);
public <ID> MatrixFactorizationModel<ID> matrixFactorizationGradientDescent(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
MatrixFactorizationModel<Integer> matrix = analyst.matrixFactorizationGradientDescent(graph, cost);
matrix.getRootMeanSquareError();
java.lang.InterruptedException
java.util.concurrent.ExecutionException
public <ID> MatrixFactorizationModel<ID> matrixFactorizationGradientDescent(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight, double learningRate, double changePerStep, double lambda, int maxStep, int vectorLength) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
learningRate - learning rate for the optimization process.
changePerStep - parameter used to modulate the learning rate during the optimization process.
lambda - penalization parameter to avoid overfitting during the optimization process.
maxStep - maximum number of iterations that will be performed.
vectorLength - size of the feature vectors to be generated for the factorization.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
MatrixFactorizationModel<Integer> matrix = analyst.matrixFactorizationGradientDescent(
graph, cost, 0.1, 0.9, 0.1, 100, 20);
matrix.getRootMeanSquareError();
java.lang.InterruptedException
java.util.concurrent.ExecutionException
public <ID> MatrixFactorizationModel<ID> matrixFactorizationGradientDescent(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight, double learningRate, double changePerStep, double lambda, int maxStep, int vectorLength, VertexProperty<ID,PgxVect<java.lang.Double>> features) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
learningRate - learning rate for the optimization process.
changePerStep - parameter used to modulate the learning rate during the optimization process.
lambda - penalization parameter to avoid overfitting during the optimization process.
maxStep - maximum number of iterations that will be performed.
vectorLength - size of the feature vectors to be generated for the factorization.
features - (out argument) vertex property holding the generated feature vectors for each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVect<Double>> features = graph.createVertexVectorProperty(PropertyType.DOUBLE, 20);
MatrixFactorizationModel<Integer> matrix = analyst.matrixFactorizationGradientDescent(
graph, cost, 0.1, 0.9, 0.1, 100, 20, features);
matrix.getRootMeanSquareError();
java.lang.InterruptedException
java.util.concurrent.ExecutionException
public <ID> MatrixFactorizationModel<ID> matrixFactorizationGradientDescent(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,PgxVect<java.lang.Double>> features) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
features - (out argument) vertex property holding the generated feature vectors for each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVect<Double>> features = graph.createVertexVectorProperty(PropertyType.DOUBLE, 20);
MatrixFactorizationModel<Integer> matrix = analyst.matrixFactorizationGradientDescent(graph, cost, features);
matrix.getRootMeanSquareError();
java.lang.InterruptedException
java.util.concurrent.ExecutionException
public <ID> PgxFuture<MatrixFactorizationModel<ID>> matrixFactorizationGradientDescentAsync(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<MatrixFactorizationModel<Integer>> promise = analyst.matrixFactorizationGradientDescentAsync(
graph, cost);
promise.thenAccept(matrix -> {
matrix.getRootMeanSquareError();
});
public <ID> PgxFuture<MatrixFactorizationModel<ID>> matrixFactorizationGradientDescentAsync(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight, double learningRate, double changePerStep, double lambda, int maxStep, int vectorLength)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
learningRate - learning rate for the optimization process.
changePerStep - parameter used to modulate the learning rate during the optimization process.
lambda - penalization parameter to avoid overfitting during the optimization process.
maxStep - maximum number of iterations that will be performed.
vectorLength - size of the feature vectors to be generated for the factorization.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<MatrixFactorizationModel<Integer>> promise = analyst.matrixFactorizationGradientDescentAsync(
graph, cost, 0.1, 0.9, 0.1, 100, 20);
promise.thenAccept(matrix -> {
matrix.getRootMeanSquareError();
});
public <ID> PgxFuture<MatrixFactorizationModel<ID>> matrixFactorizationGradientDescentAsync(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight, double learningRate, double changePerStep, double lambda, int maxStep, int vectorLength, VertexProperty<ID,PgxVect<java.lang.Double>> features)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
learningRate - learning rate for the optimization process.
changePerStep - parameter used to modulate the learning rate during the optimization process.
lambda - penalization parameter to avoid overfitting during the optimization process.
maxStep - maximum number of iterations that will be performed.
vectorLength - size of the feature vectors to be generated for the factorization.
features - (out argument) vertex property holding the generated feature vectors for each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVect<Double>> features = graph.createVertexVectorProperty(PropertyType.DOUBLE, 20);
PgxFuture<MatrixFactorizationModel<Integer>> promise = analyst.matrixFactorizationGradientDescentAsync(
graph, cost, 0.1, 0.9, 0.1, 100, 20, features);
promise.thenAccept(matrix -> {
matrix.getRootMeanSquareError();
});
public <ID> PgxFuture<MatrixFactorizationModel<ID>> matrixFactorizationGradientDescentAsync(BipartiteGraph graph, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,PgxVect<java.lang.Double>> features)
Matrix factorization can be used as a recommendation algorithm for bipartite graphs
This algorithm needs a [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph to generate feature vectors that factorize the given set of left vertices (users) and right vertices (items), so that the inner product of such feature vectors can recover the information from the original graph structure, which can be seen as a sparse matrix. The generated feature vectors can be used for making recommendations with the given set of users, where a good recommendation for a given user will be a dot (inner) product between the feature vector of the user and the corresponding feature vector of a vertex from the item set, such that the result of that dot product returns a high score.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * k * s) with E = number of edges, k = maximum number of iterations, s = size of the feature vectors
O(2 * V * s) with V = number of vertices, s = size of the feature vectors
graph - Bipartite graph.
weight - edge property holding the rating weight of each edge in the graph. The weight needs to be pre-scaled into the range 1-5. If the weight values are not between 1 and 5, the result will become inaccurate.
features - (out argument) vertex property holding the generated feature vectors for each vertex.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVect<Double>> features = graph.createVertexVectorProperty(PropertyType.DOUBLE, 20);
PgxFuture<MatrixFactorizationModel<Integer>> promise = analyst.matrixFactorizationGradientDescentAsync(
graph, cost, features);
promise.thenAccept(matrix -> {
matrix.getRootMeanSquareError();
});
public <ID> VertexProperty<ID,java.lang.Double> matrixFactorizationRecommendations(BipartiteGraph graph, ID user, int vectorLength, VertexProperty<ID,PgxVect<java.lang.Double>> feature, VertexProperty<ID,java.lang.Double> estimatedRating) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience overload of matrixFactorizationRecommendations(BipartiteGraph, PgxVertex, int, VertexProperty, VertexProperty) taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> matrixFactorizationRecommendations(BipartiteGraph graph, PgxVertex<ID> user, int vectorLength, VertexProperty<ID,PgxVect<java.lang.Double>> feature, VertexProperty<ID,java.lang.Double> estimatedRating) throws java.lang.InterruptedException, java.util.concurrent.ExecutionException
Estimate rating can be used as a prediction algorithm for bipartite graphs
This algorithm is a complement for Matrix Factorization, thus it needs a bipartite graph and the generated feature vectors from such algorithm. The generated feature vectors will be used for making predictions in cases where the given user vertex has not been related to a particular item from the item set. Similarly to the recommendations from matrix factorization, this algorithm will perform dot products between the given user vertex and the rest of vertices in the graph, giving a score of 0 to the items that are already related to the user and to the products with other user vertices, hence returning the results of the dot products for the unrelated item vertices. The scores from those dot products can be interpreted as the predicted scores for the unrelated items given a particular user vertex.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- Bipartite graph.
user
- vertex from the left (user) side of the graph.
vectorLength
- size of the feature vectors.
feature
- vertex property holding the feature vectors for each vertex.
estimatedRating
- (out argument) vertex property holding the estimated rating score for each vertex.
java.lang.InterruptedException
java.util.concurrent.ExecutionException
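This PgxVertex-based variant differs from the ID-based sketch above only in how the user is passed; reusing the assumed graph, features and estimatedRating variables from that sketch:
PgxVertex<Integer> user = graph.getVertex(333);
VertexProperty<Integer, Double> predictions = analyst.matrixFactorizationRecommendations(graph, user, 20, features, estimatedRating);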
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> matrixFactorizationRecommendationsAsync(BipartiteGraph graph, PgxVertex<ID> user, int vectorLength, VertexProperty<ID,PgxVect<java.lang.Double>> feature, VertexProperty<ID,java.lang.Double> estimatedRating)
Estimate rating can be used as a prediction algorithm for bipartite graphs
This algorithm is a complement for Matrix Factorization, thus it needs a bipartite graph and the generated feature vectors from such algorithm. The generated feature vectors will be used for making predictions in cases where the given user vertex has not been related to a particular item from the item set. Similarly to the recommendations from matrix factorization, this algorithm will perform dot products between the given user vertex and the rest of vertices in the graph, giving a score of 0 to the items that are already related to the user and to the products with other user vertices, hence returning the results of the dot products for the unrelated item vertices. The scores from those dot products can be interpreted as the predicted scores for the unrelated items given a particular user vertex.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
user
- vertex from the left (user) side of the graph.
vectorLength
- size of the feature vectors.
feature
- vertex property holding the feature vectors for each vertex.
estimatedRating
- (out argument) vertex property holding the estimated rating score for each vertex.
public <ID> VertexProperty<ID,java.lang.Integer> outDegreeCentrality(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Out-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Out-Degree centrality returns the sum of the number of outgoing edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> degree = analyst.outDegreeCentrality(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Integer> outDegreeCentrality(PgxGraph graph, VertexProperty<ID,java.lang.Integer> dc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Out-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Out-Degree centrality returns the sum of the number of outgoing edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
dc
- (out argument) vertex property holding the degree centrality value for each vertex in the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> dc = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, Integer> degree = analyst.outDegreeCentrality(graph, dc);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> outDegreeCentralityAsync(PgxGraph graph)
Out-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Out-Degree centrality returns the sum of the number of outgoing edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.outDegreeCentralityAsync(graph);
promise.thenCompose(degree -> graph.queryPgqlAsync(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> outDegreeCentralityAsync(PgxGraph graph, java.lang.String propertyName)
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> outDegreeCentralityAsync(PgxGraph graph, VertexProperty<ID,java.lang.Integer> dc)
Out-degree centrality measures the centrality of the vertices based on their degree, letting you see how a vertex influences its neighborhood
Out-Degree centrality returns the sum of the number of outgoing edges for each vertex in the graph.
This algorithm is designed to run in parallel given its high degree of parallelism.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
dc
- (out argument) vertex property holding the degree centrality value for each vertex in the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> dc = graph.createVertexProperty(PropertyType.INTEGER);
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.outDegreeCentralityAsync(graph, dc);
promise.thenCompose(degree -> graph.queryPgqlAsync(
"SELECT x, x." + degree.getName() + " MATCH (x) ORDER BY x." + degree.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public PgxMap<java.lang.Integer,java.lang.Long> outDegreeDistribution(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Out-degree distribution gives information about the outgoing flows in a graph
This version of the degree distribution will return a map with the distribution of the out-degree (i.e. just outgoing edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxMap<Integer, Long> degree = analyst.outDegreeDistribution(graph);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
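The returned map acts as a histogram keyed by out-degree value. A small follow-up sketch (assuming PgxMap exposes a get-by-key accessor) that reads a single bucket of the distribution computed above:
Long verticesWithOutDegreeThree = degree.get(3);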
public PgxMap<java.lang.Integer,java.lang.Long> outDegreeDistribution(PgxGraph graph, PgxMap<java.lang.Integer,java.lang.Long> distribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Out-degree distribution gives information about the outgoing flows in a graph
This version of the degree distribution will return a map with the distribution of the out-degree (i.e. just outgoing edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
distribution
- (out argument)
PgxGraph graph = ...;
PgxMap<Integer, Long> distribution = graph.createMap(PropertyType.INTEGER, PropertyType.LONG);
PgxMap<Integer, Long> degree = analyst.outDegreeDistribution(graph, distribution);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> outDegreeDistributionAsync(PgxGraph graph)
Out-degree distribution gives information about the outgoing flows in a graph
This version of the degree distribution will return a map with the distribution of the out-degree (i.e. just outgoing edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<PgxMap<Integer, Long>> promise = analyst.outDegreeDistributionAsync(graph);
promise.thenAccept(map -> {
...;
});
public PgxFuture<PgxMap<java.lang.Integer,java.lang.Long>> outDegreeDistributionAsync(PgxGraph graph, PgxMap<java.lang.Integer,java.lang.Long> distribution)
Out-degree distribution gives information about the outgoing flows in a graph
This version of the degree distribution will return a map with the distribution of the out-degree (i.e. just outgoing edges) of the graph. For undirected graphs the algorithm will consider all the edges (incoming and outgoing) for the distribution.
This algorithm runs in a sequential way. It uses a map with type int for the keys and type long for storing the mapped values of the distribution, like a histogram.
O(V) with V = number of vertices
O(V) with V = number of vertices
graph
- the graph.
distribution
- (out argument)
PgxGraph graph = ...;
PgxMap<Integer, Long> distribution = graph.createMap(PropertyType.INTEGER, PropertyType.LONG);
PgxFuture<PgxMap<Integer, Long>> promise = analyst.outDegreeDistributionAsync(graph, distribution);
promise.thenAccept(map -> {
...;
});
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph, boolean norm) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph, false);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph, boolean norm, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph, false, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph, double e, double d, int max) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph, 0.001, 0.85, 100);
PgqlResultSet rs = graph.queryPgql("SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." +
pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
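Since the scores are exposed as an ordinary vertex property, the PGQL query can be narrowed as needed. A small variation of the query above (assuming this PGQL version supports the LIMIT clause) that fetches only the ten highest-ranked vertices:
PgqlResultSet top = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC LIMIT 10");
top.print();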
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph, double e, double d, int max, boolean norm) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph, 0.001, 0.85, 100, false);
PgqlResultSet rs = graph.queryPgql("SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." +
pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph, double e, double d, int max, boolean norm, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph, 0.001, 0.85, 100, false, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph, 0.001, 0.85, 100, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerank(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.pagerank(graph, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerankApproximate(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Double> pagerank = analyst.pagerankApproximate(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
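To gauge the accuracy trade-off described above, the approximate and the classic variant can be run on the same graph and compared for a given vertex. The following is only a sketch under assumed names (vertex ID 128 is hypothetical, and a per-vertex get accessor on VertexProperty is assumed):
VertexProperty<Integer, Double> exact = analyst.pagerank(graph);
VertexProperty<Integer, Double> approx = analyst.pagerankApproximate(graph);
PgxVertex<Integer> v = graph.getVertex(128);
double difference = Math.abs(exact.get(v) - approx.get(v));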
public <ID> VertexProperty<ID,java.lang.Double> pagerankApproximate(PgxGraph graph, double e, double d, int max) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
VertexProperty<Integer, Double> pagerank = analyst.pagerankApproximate(graph, 0.001, 0.85, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerankApproximate(PgxGraph graph, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.pagerankApproximate(graph, 0.001, 0.85, 100, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> pagerankApproximate(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.pagerankApproximate(graph, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankApproximateAsync(PgxGraph graph)
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankApproximateAsync(graph);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankApproximateAsync(PgxGraph graph, double e, double d, int max)
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankApproximateAsync(graph, 0.001, 0.85, 100);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankApproximateAsync(PgxGraph graph, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank)
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankApproximateAsync(
graph, 0.001, 0.85, 100, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankApproximateAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank)
Faster, but less accurate than pagerank. It compares and spots out important vertices in a graph
This variant of the PageRank algorithm computes the ranking scores for the vertices in a similar way to the classic algorithm, without normalization and with a more relaxed convergence criterion, since the tolerated error value is compared against each single vertex in the graph, instead of looking at the cumulative vertex error. Thus this variant will converge faster than the classic algorithm, but the ranking values might not be as accurate as in the classic implementation.
The implementation of this algorithm uses an iterative method. The ranking values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankApproximateAsync(graph, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph, boolean norm)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph, false);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph, boolean norm, VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph, false, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph, double e, double d, int max)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph, 0.001, 0.85, 100);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph, double e, double d, int max, boolean norm)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph, 0.001, 0.85, 100, false);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph, double e, double d, int max, boolean norm, VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph, 0.001, 0.85, 100, false, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph, 0.001, 0.85, 100, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> pagerankAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> rank)
PageRank computes ranking scores based on the edges in a graph. It compares and spots out important vertices in a graph
PageRank is an algorithm that computes ranking scores for the vertices using the network created by the incoming edges in the graph. Thus it is intended for directed graphs, although undirected graphs can be treated as well by converting them into directed graphs with reciprocated edges (i.e. keeping the original edge and creating a second one going in the opposite direction). The edges on the graph will define the relevance of each vertex in the graph, reflecting this on the scores, meaning that greater scores will correspond to vertices with greater relevance.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.pagerankAsync(graph, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>> partitionConductance(PgxGraph graph, Partition<ID> partition) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Partition conductance assesses the quality of many partitions in a graph
This variant of the conductance algorithm will compute the conductance for the given number of components, returning an output with the minimum value of conductance found from the corresponding partitions and their average conductance value.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E) with E = number of edges
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Pair<Scalar<Double>, Scalar<Double>> conductance = analyst.partitionConductance(graph, partition);
conductance.getFirst().get();
conductance.getSecond().get();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>> partitionConductance(PgxGraph graph, Partition<ID> partition, Scalar<java.lang.Double> avgConductance, Scalar<java.lang.Double> minConductance) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Partition conductance assesses the quality of many partitions in a graph
This variant of the conductance algorithm will compute the conductance for the given number of components, returning an output with the minimum value of conductance found from the corresponding partitions and their average conductance value.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E) with E = number of edges
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
avgConductance
- Scalar that will get updated with the computed average conductance value.
minConductance
- Scalar that will get updated with the computed minimum conductance value.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Scalar<Double> avgConductance = graph.createScalar(PropertyType.DOUBLE);
Scalar<Double> minConductance = graph.createScalar(PropertyType.DOUBLE);
Pair<Scalar<Double>, Scalar<Double>> conductance =
analyst.partitionConductance(graph, partition, avgConductance, minConductance);
conductance.getFirst().get();
conductance.getSecond().get();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>>> partitionConductanceAsync(PgxGraph graph, Partition<ID> partition)
Partition conductance assesses the quality of many partitions in a graph
This variant of the conductance algorithm will compute the conductance for the given number of components, returning an output with the minimum value of conductance found from the corresponding partitions and their average conductance value.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E) with E = number of edges
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
PgxFuture<Pair<Scalar<Double>, Scalar<Double>>> promise = analyst.partitionConductanceAsync(graph, partition);
promise.thenAccept(conductance -> {
conductance.getFirst().get();
conductance.getSecond().get();
});
public <ID> PgxFuture<Pair<Scalar<java.lang.Double>,Scalar<java.lang.Double>>> partitionConductanceAsync(PgxGraph graph, Partition<ID> partition, Scalar<java.lang.Double> avgConductance, Scalar<java.lang.Double> minConductance)
Partition conductance assesses the quality of many partitions in a graph
This variant of the conductance algorithm will compute the conductance for the given number of components, returning an output with the minimum value of conductance found from the corresponding partitions and their average conductance value.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E) with E = number of edges
O(1)
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
avgConductance
- Scalar that will get updated with the computed average conductance value.
minConductance
- Scalar that will get updated with the computed minimum conductance value.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Scalar<Double> avgConductance = graph.createScalar(PropertyType.DOUBLE);
Scalar<Double> minConductance = graph.createScalar(PropertyType.DOUBLE);
PgxFuture<Pair<Scalar<Double>, Scalar<Double>>> promise = analyst.partitionConductanceAsync(
graph, partition, avgConductance, minConductance);
promise.thenAccept(conductance -> {
conductance.getFirst().get();
conductance.getSecond().get();
});
public <ID> Scalar<java.lang.Double> partitionModularity(PgxGraph graph, Partition<ID> partition) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Modularity summarizes information about the quality of components in a graph
Modularity in a graph is a measure for assessing the quality of the partition induced by the components (or community structures) within the graph found by any clustering algorithm (e.g. label propagation, Infomap, WCC, etc.). It compares the number of the edges between the vertices within a component against the expected number of edges if these were generated at random (assuming a uniform probability distribution). A positive modularity value means that, on average, there are more edges within the components than the amount expected (meaning stronger components), and vice versa for a negative modularity value. This implementation is intended for directed graphs.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E * c) with E = number of edges, c = number of components
O(V) with V = number of vertices
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Scalar<Double> modularity = analyst.partitionModularity(graph, partition);
modularity.get();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Scalar<java.lang.Double> partitionModularity(PgxGraph graph, Partition<ID> partition, Scalar<java.lang.Double> modularity) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Modularity summarizes information about the quality of components in a graph
Modularity in a graph is a measure for assessing the quality of the partition induced by the components (or community structures) within the graph found by any clustering algorithm (e.g. label propagation, Infomap, WCC, etc.). It compares the number of edges between the vertices within a component against the expected number of edges if these were generated at random (assuming a uniform probability distribution). A positive modularity value means that, on average, there are more edges within the components than expected (meaning stronger components), and vice versa for a negative modularity value. This implementation is intended for directed graphs.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E * c) with E = number of edges, c = number of components
O(V) with V = number of vertices
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
modularity
- Scalar (double) to store the modularity value.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Scalar<Double> scalar = graph.createScalar(PropertyType.DOUBLE);
Scalar<Double> modularity = analyst.partitionModularity(graph, partition, scalar);
modularity.get();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Scalar<java.lang.Double>> partitionModularityAsync(PgxGraph graph, Partition<ID> partition)
Modularity summarizes information about the quality of components in a graph
Modularity in a graph is a measure for assessing the quality of the partition induced by the components (or community structures) within the graph found by any clustering algorithm (e.g. label propagation, Infomap, WCC, etc.). It compares the number of edges between the vertices within a component against the expected number of edges if these were generated at random (assuming a uniform probability distribution). A positive modularity value means that, on average, there are more edges within the components than expected (meaning stronger components), and vice versa for a negative modularity value. This implementation is intended for directed graphs.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E * c) with E = number of edges, c = number of components
O(V) with V = number of vertices
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
PgxFuture<Scalar<Double>> promise = analyst.partitionModularityAsync(graph, partition);
promise.thenAccept(modularity -> {
modularity.get();
});
public <ID> PgxFuture<Scalar<java.lang.Double>> partitionModularityAsync(PgxGraph graph, Partition<ID> partition, Scalar<java.lang.Double> modularity)
Modularity summarizes information about the quality of components in a graph
Modularity in a graph is a measure for assessing the quality of the partition induced by the components (or community structures) within the graph found by any clustering algorithm (e.g. label propagation, Infomap, WCC, etc.). It compares the number of edges between the vertices within a component against the expected number of edges if these were generated at random (assuming a uniform probability distribution). A positive modularity value means that, on average, there are more edges within the components than expected (meaning stronger components), and vice versa for a negative modularity value. This implementation is intended for directed graphs.
This algorithm is designed to run in parallel given its high degree of parallelization. Note that this algorithm will be inefficient if the number of components is large (i.e. O(N)).
O(E * c) with E = number of edges, c = number of components
O(V) with V = number of vertices
graph
- the graph.
partition
- Partition of the graph with the corresponding node collections.
modularity
- Scalar (double) to store the modularity value.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
Scalar<Double> scalar = graph.createScalar(PropertyType.DOUBLE);
PgxFuture<Scalar<Double>> promise = analyst.partitionModularityAsync(graph, partition, scalar);
promise.thenAccept(modularity -> {
modularity.get();
});
public <ID> PgxFuture<Scalar<java.lang.Double>> partitionModularityAsync(PgxGraph graph, Partition<ID> partition, java.lang.String modularityName)
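This overload carries no description in the source. Judging by the neighboring overloads, it presumably stores the modularity value in a scalar identified by modularityName instead of a caller-supplied Scalar object; the following usage sketch rests on that assumption, and the scalar name "myModularity" is made up for illustration.
PgxGraph graph = ...;
Partition<Integer> partition = analyst.communitiesConductanceMinimization(graph);
// "myModularity" is a hypothetical name for the destination scalar
PgxFuture<Scalar<Double>> promise = analyst.partitionModularityAsync(graph, partition, "myModularity");
promise.thenAccept(modularity -> {
modularity.get();
});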
public <ID> VertexSet<ID> periphery(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximum eccentricity over all vertices in the graph, while the radius is the minimum eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. For graphs with more than one strongly connected component, the algorithm will return a set containing all the vertices.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
VertexSet<Integer> periphery = analyst.periphery(graph);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
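To make the eccentricity/diameter/periphery relationship above concrete, here is a small standalone sketch in plain Java (not using the PGX API) that computes eccentricities with one BFS per vertex and derives the periphery from them. The real implementation uses MS-BFS and runs in parallel, so this is only an illustration of the definitions.
import java.util.*;

// Illustration of the definitions only: eccentricity via one BFS per vertex,
// diameter = maximum eccentricity, periphery = vertices whose eccentricity equals the diameter.
public class PeripherySketch {

    // adjacency: for each vertex, the list of its neighbors (treated as undirected here)
    static List<Integer> periphery(List<List<Integer>> adjacency) {
        int n = adjacency.size();
        int[] eccentricity = new int[n];
        for (int s = 0; s < n; s++) {
            eccentricity[s] = bfsEccentricity(adjacency, s);
        }
        int diameter = Arrays.stream(eccentricity).max().orElse(0);
        List<Integer> periphery = new ArrayList<>();
        for (int v = 0; v < n; v++) {
            if (eccentricity[v] == diameter) {
                periphery.add(v);
            }
        }
        return periphery;
    }

    // maximum BFS distance from the source to any reachable vertex
    static int bfsEccentricity(List<List<Integer>> adjacency, int source) {
        int[] dist = new int[adjacency.size()];
        Arrays.fill(dist, -1);
        dist[source] = 0;
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(source);
        int ecc = 0;
        while (!queue.isEmpty()) {
            int v = queue.poll();
            ecc = Math.max(ecc, dist[v]);
            for (int w : adjacency.get(v)) {
                if (dist[w] == -1) {
                    dist[w] = dist[v] + 1;
                    queue.add(w);
                }
            }
        }
        return ecc;
    }

    public static void main(String[] args) {
        // path graph 0 - 1 - 2 - 3: diameter 3, periphery {0, 3}
        List<List<Integer>> adjacency = List.of(
            List.of(1), List.of(0, 2), List.of(1, 3), List.of(2));
        System.out.println(periphery(adjacency)); // prints [0, 3]
    }
}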
public <ID> VertexSet<ID> periphery(PgxGraph graph, VertexSet<ID> periphery) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximum eccentricity over all vertices in the graph, while the radius is the minimum eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. For graphs with more than one strongly connected component, the algorithm will return a set containing all the vertices.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
periphery
- (out argument) vertex set holding the vertices from the periphery or center of the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.createVertexSet();
VertexSet<Integer> periphery = analyst.periphery(graph, vertices);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexSet<ID>> peripheryAsync(PgxGraph graph)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximum eccentricity over all vertices in the graph, while the radius is the minimum eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. For graphs with more than one strongly connected component, the algorithm will return a set containing all the vertices.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<VertexSet<Integer>> promise = analyst.peripheryAsync(graph);
promise.thenAccept(periphery -> {
...
});
public <ID> PgxFuture<VertexSet<ID>> peripheryAsync(PgxGraph graph, VertexSet<ID> periphery)
Periphery/center gives an overview of the extreme distances and the corresponding vertices in a graph
The periphery of a graph is the set of vertices that have an eccentricity value equal to the diameter of the graph. Similarly, the center is the set of vertices with eccentricity equal to the radius of the graph. The diameter of a graph is the maximum eccentricity over all vertices in the graph, while the radius is the minimum eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm will return the set of vertices from the periphery or the center of the graph, depending on the request. For graphs with more than one strongly connected component, the algorithm will return a set containing all the vertices.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph
- the graph.
periphery
- (out argument) vertex set holding the vertices from the periphery or center of the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.createVertexSet();
PgxFuture<VertexSet<Integer>> promise = analyst.peripheryAsync(graph, vertices);
promise.thenAccept(periphery -> {
...;
});
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Same as personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int), but taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max, boolean norm) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Same as personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, boolean), but taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max, boolean norm, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Same as personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, boolean, VertexProperty), but taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Same as personalizedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, VertexProperty), but taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
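These ID-based overloads carry no example of their own, so here is a hedged sketch mirroring the examples of the PgxVertex-based overloads below; it assumes, as they do, that the vertex ID 128 identifies an existing vertex and uses the same tolerance, damping factor, and iteration cap.
PgxGraph graph = ...;
// same parameters as the PgxVertex-based examples, but passing the raw vertex ID
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(
    graph, 128, new BigDecimal("0.001"), new BigDecimal("0.85"), 100);
PgqlResultSet rs = graph.queryPgql(
    "SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();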
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertex);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v, boolean norm) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertex, false);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v, boolean norm, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertex, false, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertex, 0.001, 0.85, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
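The parameters e, d, and max above map directly onto the iterative method the description mentions. The following standalone sketch in plain Java (not the PGX implementation) illustrates how the tolerance, damping factor, and iteration cap interact; routing the mass of dangling vertices back to the personalization vertex is one plausible convention chosen here for illustration only.
import java.util.*;

// Minimal illustration of the iterative personalized PageRank update:
// every vertex's rank is recomputed each iteration, and the teleportation
// mass returns to the personalization vertex instead of being spread uniformly.
public class PersonalizedPageRankSketch {

    // adjacency: for each vertex, the list of its out-neighbors
    static double[] personalizedPagerank(List<List<Integer>> adjacency,
                                         int personalizationVertex,
                                         double tolerance,
                                         double damping,
                                         int maxIterations) {
        int n = adjacency.size();
        double[] rank = new double[n];
        rank[personalizationVertex] = 1.0;   // all initial mass on the chosen vertex

        for (int iter = 0; iter < maxIterations; iter++) {
            double[] next = new double[n];
            // teleportation: (1 - d) of the mass always returns to the chosen vertex
            next[personalizationVertex] = 1.0 - damping;

            for (int v = 0; v < n; v++) {
                List<Integer> out = adjacency.get(v);
                if (out.isEmpty()) {
                    // dangling vertex: hand its mass back to the personalization vertex (an assumption)
                    next[personalizationVertex] += damping * rank[v];
                } else {
                    double share = damping * rank[v] / out.size();
                    for (int w : out) {
                        next[w] += share;
                    }
                }
            }

            // stop once the total change falls below the tolerance (the role of e)
            double error = 0.0;
            for (int v = 0; v < n; v++) {
                error += Math.abs(next[v] - rank[v]);
            }
            rank = next;
            if (error < tolerance) {
                break;
            }
        }
        return rank;
    }

    public static void main(String[] args) {
        // tiny directed graph: 0 -> 1, 1 -> 2, 2 -> 0, 2 -> 1
        List<List<Integer>> adjacency = List.of(
            List.of(1), List.of(2), List.of(0, 1));
        double[] rank = personalizedPagerank(adjacency, 0, 0.001, 0.85, 100);
        System.out.println(Arrays.toString(rank));
    }
}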
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertex, 0.001, 0.85, 100, false);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank =
analyst.personalizedPagerank(graph, vertex, 0.001, 0.85, 100, false, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertex, 0.001, 0.85, 100, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, PgxVertex<ID> v, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertex, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertices);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices, boolean norm) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertices, false);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices, boolean norm, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertices, false, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertices, 0.001, 0.85, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertices, 0.001, 0.85, 100, false);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank =
analyst.personalizedPagerank(graph, vertices, 0.001, 0.85, 100, false, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertices, 0.001, 0.85, 100, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedPagerank(PgxGraph graph, VertexSet<ID> vertices, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerank(graph, vertices, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(graph, vertex);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
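If a blocking call is preferred over the callback style used in these async examples, the returned future can also be waited on directly. This is a sketch under the assumption that PgxFuture exposes the usual java.util.concurrent.Future-style get(), as the checked exceptions of the synchronous variants suggest:
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
// block until the asynchronous computation finishes
VertexProperty<Integer, Double> pagerank = analyst.personalizedPagerankAsync(graph, vertex).get();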
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, boolean norm)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(graph, vertex, false);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, boolean norm, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(graph, vertex, false, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertex, 0.001, 0.85, 100);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertex, 0.001, 0.85, 100, false);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertex, 0.001, 0.85, 100, false, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertex, 0.001, 0.85, 100, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a vertex of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(V) with V = number of vertices
graph
- the graph.
v
- the chosen vertex from the graph for personalization.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(graph, vertex, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(graph, vertices);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, boolean norm)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(graph, vertices, false);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, boolean norm, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
The personalized PageRank allows the selection of a particular vertex or a set of vertices from the given graph in order to give them greater importance when computing the ranking scores, which results in a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed and updated at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
vertices
- the set of chosen vertices from the graph for personalization.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertices, false, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
Personalized PageRank selects a particular vertex or a set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertices, 0.001, 0.85, 100);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
Personalized PageRank selects a particular vertex or a set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertices, 0.001, 0.85, 100, false);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
Personalized PageRank selects a particular vertex or a set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertices, 0.001, 0.85, 100, false, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
Personalized PageRank selects a particular vertex or a set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(
graph, vertices, 0.001, 0.85, 100, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices of interest. It identifies important vertices in a graph
Personalized PageRank selects a particular vertex or a set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized PageRank score that reveals vertices relevant (or similar) to the ones chosen at the beginning.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedPagerankAsync(graph, vertices, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, ID v, java.math.BigDecimal d, int maxIterations, java.math.BigDecimal maxDiff) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around personalizedSalsa(BipartiteGraph, PgxVertex, BigDecimal, int, BigDecimal), taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
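A minimal sketch of this ID-based overload, following the snippet conventions of the other examples; the Integer vertex ID 128 and the parameter values (d = 0.85, maxIterations = 100, maxDiff = 0.001) are illustrative assumptions, not part of the API contract.
BipartiteGraph graph = ...;
// Pass the raw vertex ID (assumed here to be the Integer 128) instead of a PgxVertex handle.
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(
    graph, 128, new BigDecimal("0.85"), 100, new BigDecimal("0.001"));
PgqlResultSet rs = graph.queryPgql(
    "SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();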
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, ID v, java.math.BigDecimal d, int maxIterations, java.math.BigDecimal maxDiff, VertexProperty<ID,java.lang.Double> salsaRank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around personalizedSalsa(BipartiteGraph, PgxVertex, BigDecimal, int, BigDecimal, VertexProperty), taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
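A similar sketch for the variant with an explicit output property, under the same illustrative assumptions.
BipartiteGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
// The ranking scores are written into the supplied rank property.
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(
    graph, 128, new BigDecimal("0.85"), 100, new BigDecimal("0.001"), rank);
PgqlResultSet rs = graph.queryPgql(
    "SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();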
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, PgxVertex<ID> v) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertex);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, PgxVertex<ID> v, double d, int maxIter, double maxDiff) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertex, 0.85, 100, 0.001);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, PgxVertex<ID> v, double d, int maxIter, double maxDiff, VertexProperty<ID,java.lang.Double> salsaRank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertex, 0.85, 100, 0.001, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, PgxVertex<ID> v, VertexProperty<ID,java.lang.Double> salsaRank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertex, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, VertexSet<ID> vertices) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertices);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, VertexSet<ID> vertices, double d, int maxIter, double maxDiff) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertices, 0.85, 100, 0.001);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, VertexSet<ID> vertices, double d, int maxIter, double maxDiff, VertexProperty<ID,java.lang.Double> salsaRank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertices, 0.85, 100, 0.001, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedSalsa(BipartiteGraph graph, VertexSet<ID> vertices, VertexProperty<ID,java.lang.Double> salsaRank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> salsa = analyst.personalizedSalsa(graph, vertices, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, PgxVertex<ID> v)
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(graph, vertex);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, PgxVertex<ID> v, double d, int maxIter, double maxDiff)
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(
graph, vertex, 0.85, 100, 0.001);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, PgxVertex<ID> v, double d, int maxIter, double maxDiff, VertexProperty<ID,java.lang.Double> salsaRank)
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(
graph, vertex, 0.85, 100, 0.001, rank);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, PgxVertex<ID> v, VertexProperty<ID,java.lang.Double> salsaRank)
Personalized SALSA for a vertex of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - bipartite graph.
v - the chosen vertex from the graph for personalization.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(graph, vertex, rank);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, VertexSet<ID> vertices)
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(graph, vertices);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, VertexSet<ID> vertices, double d, int maxIter, double maxDiff)
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(
graph, vertices, 0.85, 100, 0.001);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, VertexSet<ID> vertices, double d, int maxIter, double maxDiff, VertexProperty<ID,java.lang.Double> salsaRank)
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
d - damping factor to modulate the degree of personalization of the scores by the algorithm.
maxIter - maximum number of iterations that will be performed.
maxDiff - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(
graph, vertices, 0.85, 100, 0.001, rank);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedSalsaAsync(BipartiteGraph graph, VertexSet<ID> vertices, VertexProperty<ID,java.lang.Double> salsaRank)
Personalized SALSA for a set of vertices of interest. It assesses the quality of information and references in linked structures
This personalized version of SALSA selects a particular vertex or set of vertices from the given graph and gives them greater importance when computing the ranking scores. The result is a personalized SALSA score that shows vertices relevant (or similar) to the ones chosen for the personalization.
The implementation of this algorithm uses an iterative method. It converges once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - bipartite graph.
vertices - the set of chosen vertices from the graph for personalization.
salsaRank - (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
BipartiteGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedSalsaAsync(graph, vertices, rank);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max, boolean norm, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, boolean, EdgeProperty), taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
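A minimal sketch of this ID-based overload; the vertex ID 128, the "cost" edge property, and the parameter values are illustrative assumptions carried over from the other examples.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// e = 0.001, d = 0.85, max = 100 iterations, norm = false; vertex ID 128 is assumed to exist.
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(
    graph, 128, new BigDecimal("0.001"), new BigDecimal("0.85"), 100, false, cost);
PgqlResultSet rs = graph.queryPgql(
    "SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();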
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, boolean, EdgeProperty, VertexProperty), taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
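The same illustrative call, writing the result into a pre-created rank property (same assumptions as above).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(
    graph, 128, new BigDecimal("0.001"), new BigDecimal("0.85"), 100, false, cost, rank);
PgqlResultSet rs = graph.queryPgql(
    "SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();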
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, EdgeProperty), taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
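An illustrative call to the overload without the norm flag, under the same assumptions about vertex ID 128 and the "cost" property.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(
    graph, 128, new BigDecimal("0.001"), new BigDecimal("0.85"), 100, cost);
PgqlResultSet rs = graph.queryPgql(
    "SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();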
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, ID vertexId, java.math.BigDecimal e, java.math.BigDecimal d, int max, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around personalizedWeightedPagerank(PgxGraph, PgxVertex, BigDecimal, BigDecimal, int, EdgeProperty, VertexProperty), taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
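And the corresponding illustrative call with an output property, under the same assumptions.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(
    graph, 128, new BigDecimal("0.001"), new BigDecimal("0.85"), 100, cost, rank);
PgqlResultSet rs = graph.queryPgql(
    "SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();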
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, boolean norm, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(graph, vertex, false, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(graph, vertex, false, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertex, 0.001, 0.85, 100, false, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertex, 0.001, 0.85, 100, false, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertex, 0.001, 0.85, 100, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertex, 0.001, 0.85, 100, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(graph, vertex, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, PgxVertex<ID> v, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(graph, vertex, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, boolean norm, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(graph, vertices, false, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph
Personalized Weighted PageRank combines the weighted and the personalized variants to make the personalization of the results more specific: both the selection of a subset of vertices and the inclusion of specific edge weights help set the importance of the ranking scores as they are computed.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertices, false, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertices, 0.001, 0.85, 100, false, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertices, 0.001, 0.85, 100, false, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertices, 0.001, 0.85, 100, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank =
analyst.personalizedWeightedPagerank(graph, vertices, 0.001, 0.85, 100, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(graph, vertices, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> personalizedWeightedPagerank(PgxGraph graph, VertexSet<ID> vertices, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.personalizedWeightedPagerank(graph, vertices, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, boolean norm, EdgeProperty<java.lang.Double> weight)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, false, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
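The Async variants return a PgxFuture. If blocking behavior is preferred, the future can also be awaited directly; the following is a minimal sketch, assuming PgxFuture follows the standard java.util.concurrent.Future contract (consistent with the checked exceptions declared by the blocking overloads):
// Block until the asynchronous computation completes; failures surface as ExecutionException.
VertexProperty<Integer, Double> pagerank =
    analyst.personalizedWeightedPagerankAsync(graph, vertex, false, cost).get();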
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, false, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, 0.001, 0.85, 100, false, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, 0.001, 0.85, 100, false, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, EdgeProperty<java.lang.Double> weight)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, 0.001, 0.85, 100, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, double e, double d, int max, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, 0.001, 0.85, 100, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, EdgeProperty<java.lang.Double> weight)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, PgxVertex<ID> v, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized Weighted PageRank for a vertex and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
v - the chosen vertex from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) weighted PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertex, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, boolean norm, EdgeProperty<java.lang.Double> weight)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, false, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, false, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, 0.001, 0.85, 100, false, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, 0.001, 0.85, 100, false, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, EdgeProperty<java.lang.Double> weight)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, 0.001, 0.85, 100, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, double e, double d, int max, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, 0.001, 0.85, 100, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, EdgeProperty<java.lang.Double> weight)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> personalizedWeightedPagerankAsync(PgxGraph graph, VertexSet<ID> vertices, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
Personalized PageRank for a set of vertices and weighted edges. It identifies important vertices in a graph.
The Personalized Weighted PageRank combines the weighted and the personalized variants of the algorithm: both the selection of a subset of vertices for personalization and the inclusion of specific edge weights influence the ranking scores as they are computed, making the results more specific to the chosen vertices.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(3 * V) with V = number of vertices
graph - the graph.
vertices - the set of chosen vertices from the graph for personalization.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexSet<Integer> vertices = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.personalizedWeightedPagerankAsync(
graph, vertices, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public oracle.pgx.api.beta.mllib.Pg2vecModelBuilder pg2vecModelBuilder()
public EdgeProperty<java.lang.Boolean> prim(PgxGraph graph, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Prim reveals minimum spanning tree structures in a graph.
This implementation of Prim's algorithm works on undirected graphs that are connected and have no multi-edges (i.e. more than one edge connecting the same pair of vertices). The algorithm computes the minimum spanning tree (MST) of the graph using the weights associated with each edge. A minimum spanning tree is a subset of the edges that connects all the vertices in the graph while minimizing the total weight associated with the edges.
The implementation of this algorithm uses an iterative method.
O(E + V log V) with V = number of vertices, E = number of edges
O(2 * E + V) with V = number of vertices, E = number of edges
graph - the graph.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeProperty<Boolean> prim = analyst.prim(graph, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + prim.getName() + " MATCH (x) ORDER BY x." + prim.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
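If the total weight of the computed spanning tree is needed, it can be derived from the returned property. Continuing from the example above, the following is a minimal sketch, assuming that PgxGraph#getEdges() returns an iterable EdgeSet and that EdgeProperty#get(edge) is available; verify these calls against your PGX version:
// Sum the weights of the edges flagged as belonging to the MST (assumed iteration API).
double totalWeight = 0.0;
for (PgxEdge e : graph.getEdges()) {
  if (prim.get(e)) {
    totalWeight += cost.get(e);
  }
}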
public EdgeProperty<java.lang.Boolean> prim(PgxGraph graph, EdgeProperty<java.lang.Double> weight, EdgeProperty<java.lang.Boolean> mst) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Prim reveals minimum spanning tree structures in a graph.
This implementation of Prim's algorithm works on undirected graphs that are connected and have no multi-edges (i.e. more than one edge connecting the same pair of vertices). The algorithm computes the minimum spanning tree (MST) of the graph using the weights associated with each edge. A minimum spanning tree is a subset of the edges that connects all the vertices in the graph while minimizing the total weight associated with the edges.
The implementation of this algorithm uses an iterative method.
O(E + V log V) with V = number of vertices, E = number of edges
O(2 * E + V) with V = number of vertices, E = number of edges
graph - the graph.
weight - edge property holding the weight of each edge in the graph.
mst - (out argument) edge property holding the edges belonging to the minimum spanning tree of the graph (i.e. all the edges with in_mst=true).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeProperty<Boolean> mst = graph.createEdgeProperty(PropertyType.BOOLEAN);
EdgeProperty<Boolean> prim = analyst.prim(graph, cost, mst);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + prim.getName() + " MATCH (x) ORDER BY x." + prim.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public EdgeProperty<java.lang.Boolean> prim(PgxGraph graph, EdgeProperty<java.lang.Double> weight, java.lang.String mstName) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
prim(PgxGraph, EdgeProperty, String)
java.util.concurrent.ExecutionException
java.lang.InterruptedException
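A usage sketch for this overload, mirroring the examples above; the property name "mst" is an illustrative choice for the output edge property:
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// "mst" names the output edge property holding the MST membership flags.
EdgeProperty<Boolean> prim = analyst.prim(graph, cost, "mst");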
public PgxFuture<EdgeProperty<java.lang.Boolean>> primAsync(PgxGraph graph, EdgeProperty<java.lang.Double> weight)
Prim reveals minimum spanning tree structures in a graph.
This implementation of Prim's algorithm works on undirected graphs that are connected and have no multi-edges (i.e. more than one edge connecting the same pair of vertices). The algorithm computes the minimum spanning tree (MST) of the graph using the weights associated with each edge. A minimum spanning tree is a subset of the edges that connects all the vertices in the graph while minimizing the total weight associated with the edges.
The implementation of this algorithm uses an iterative method.
O(E + V log V) with V = number of vertices, E = number of edges
O(2 * E + V) with V = number of vertices, E = number of edges
graph - the graph.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<EdgeProperty<Boolean>> promise = analyst.primAsync(graph, cost);
promise.thenCompose(prim -> graph.queryPgqlAsync(
"SELECT x, x." + prim.getName() + " MATCH (x) ORDER BY x." + prim.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public PgxFuture<EdgeProperty<java.lang.Boolean>> primAsync(PgxGraph graph, EdgeProperty<java.lang.Double> weight, EdgeProperty<java.lang.Boolean> mst)
Prim reveals minimum spanning tree structures in a graph.
This implementation of Prim's algorithm works on undirected graphs that are connected and have no multi-edges (i.e. more than one edge connecting the same pair of vertices). The algorithm computes the minimum spanning tree (MST) of the graph using the weights associated with each edge. A minimum spanning tree is a subset of the edges that connects all the vertices in the graph while minimizing the total weight associated with the edges.
The implementation of this algorithm uses an iterative method.
O(E + V log V) with V = number of vertices, E = number of edges
O(2 * E + V) with V = number of vertices, E = number of edges
graph - the graph.
weight - edge property holding the weight of each edge in the graph.
mst - (out argument) edge property holding the edges belonging to the minimum spanning tree of the graph (i.e. all the edges with in_mst=true).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeProperty<Boolean> mst = graph.createEdgeProperty(PropertyType.BOOLEAN);
PgxFuture<EdgeProperty<Boolean>> promise = analyst.primAsync(graph, cost, mst);
promise.thenCompose(prim -> graph.queryPgqlAsync(
"SELECT x, x." + prim.getName() + " MATCH (x) ORDER BY x." + prim.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> radius(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Diameter/radius gives an overview of the distances in a graph.
The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm computes the eccentricity of all the vertices and also returns the diameter or radius value, depending on the request. For graphs with more than one strongly connected component, the algorithm returns an INF eccentricity and diameter/radius.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
Pair<Scalar<Integer>, VertexProperty<Integer, Integer>> radius = analyst.radius(graph);
radius.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + radius.getSecond().getName() + " MATCH (x) ORDER BY x." + radius.getSecond().getName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
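As the description notes, the diameter can be requested as well. Assuming the Analyst exposes a diameter method that mirrors radius (check this class's method summary for the exact signature), a call would look like:
// Hypothetical counterpart to radius(graph); verify the method name against the API.
Pair<Scalar<Integer>, VertexProperty<Integer, Integer>> diameter = analyst.diameter(graph);
diameter.getFirst().get();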
public <ID> Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>> radius(PgxGraph graph, Scalar<java.lang.Integer> radius, VertexProperty<ID,java.lang.Integer> eccentricity) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Diameter/radius gives an overview of the distances in a graph.
The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm computes the eccentricity of all the vertices and also returns the diameter or radius value, depending on the request. For graphs with more than one strongly connected component, the algorithm returns an INF eccentricity and diameter/radius.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph - the graph.
radius - Scalar (integer) for holding the value of the radius of the graph.
eccentricity - (out argument) vertex property holding the eccentricity value for each vertex.
PgxGraph graph = ...;
Scalar<Integer> scalar = graph.createScalar(PropertyType.INTEGER);
VertexProperty<Integer, Integer> prop = graph.createVertexProperty(PropertyType.INTEGER);
Pair<Scalar<Integer>, VertexProperty<Integer, Integer>> radius = analyst.radius(graph, scalar, prop);
radius.getFirst().get();
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + radius.getSecond().getName() + " MATCH (x) ORDER BY x." + radius.getSecond().getName() +
" DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> radiusAsync(PgxGraph graph)
Diameter/radius gives an overview of the distances in a graph.
The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm computes the eccentricity of all the vertices and also returns the diameter or radius value, depending on the request. For graphs with more than one strongly connected component, the algorithm returns an INF eccentricity and diameter/radius.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
PgxFuture<Pair<Scalar<Integer>, VertexProperty<Integer, Integer>>> promise = analyst.radiusAsync(graph);
promise.thenCompose(radius -> {
  // The radius value itself is available via radius.getFirst()
  return graph.queryPgqlAsync(
      "SELECT x, x." + radius.getSecond().getName() + " MATCH (x) ORDER BY x." + radius.getSecond().getName() +
      " DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Pair<Scalar<java.lang.Integer>,VertexProperty<ID,java.lang.Integer>>> radiusAsync(PgxGraph graph, Scalar<java.lang.Integer> radius, VertexProperty<ID,java.lang.Integer> eccentricity)
Diameter/radius gives an overview of the distances in a graph.
The diameter of a graph is the maximal eccentricity over all the vertices in the graph, while the radius is the minimum graph eccentricity. The eccentricity of a vertex is the maximum distance via shortest paths to any other vertex in the graph. This algorithm computes the eccentricity of all the vertices and also returns the diameter or radius value, depending on the request. For graphs with more than one strongly connected component, the algorithm returns an INF eccentricity and diameter/radius.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is still an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(V) with V = number of vertices
graph - the graph.
radius - Scalar (integer) for holding the value of the radius of the graph.
eccentricity - (out argument) vertex property holding the eccentricity value for each vertex.
PgxGraph graph = ...;
Scalar<Integer> scalar = graph.createScalar(PropertyType.INTEGER);
VertexProperty<Integer, Integer> prop = graph.createVertexProperty(PropertyType.INTEGER);
PgxFuture<Pair<Scalar<Integer>, VertexProperty<Integer, Integer>>> promise = analyst.radiusAsync(
graph, scalar, prop);
promise.thenCompose(radius -> {
  // The radius value itself is available via radius.getFirst()
  return graph.queryPgqlAsync(
      "SELECT x, x." + radius.getSecond().getName() + " MATCH (x) ORDER BY x." + radius.getSecond().getName() +
      " DESC");
}).thenAccept(PgqlResultSet::print);
public <ID> PgxMap<PgxVertex<ID>,java.lang.Integer> randomWalkWithRestart(PgxGraph graph, ID source, int length, java.math.BigDecimal resetProb, PgxMap<PgxVertex<ID>,java.lang.Integer> visitCount) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Same as randomWalkWithRestart(PgxGraph, PgxVertex, int, double, PgxMap), but taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
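A usage sketch for this ID-based variant, based on the signature above; the vertex ID, walk length and reset probability are illustrative values:
PgxGraph graph = ...;
PgxMap<PgxVertex<Integer>, Integer> visitCount = graph.createMap(PropertyType.VERTEX, PropertyType.INTEGER);
// 128 is the source vertex ID; note that this overload takes the reset probability as a BigDecimal.
PgxMap<PgxVertex<Integer>, Integer> count =
    analyst.randomWalkWithRestart(graph, 128, 500, new java.math.BigDecimal("0.15"), visitCount);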
public <ID> PgxMap<PgxVertex<ID>,java.lang.Integer> randomWalkWithRestart(PgxGraph graph, PgxVertex<ID> source, int length, double resetProb, PgxMap<PgxVertex<ID>,java.lang.Integer> visitCount) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Random walk with restart does what its name says; it can find approximate stationary distributions
This algorithm performs a random walk over the graph. The walk will start at the given source vertex and will randomly visit neighboring vertices in the graph, going back to the starting point with a probability equal to resetProb. The random walk will also go back to the starting point every time it reaches a vertex with no outgoing edges. The algorithm will stop once it reaches the specified walk length.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic.
O(L) with L = length of the random walk
O(V) with V = number of vertices
graph
- the graph.
source
- (in argument) starting point of the random walk.
length
- (in argument) length (number of steps) of the random walk.
resetProb
- (in argument) probability value for resetting the random walk.
visitCount
- (out argument) map holding the number of visits during the random walk for each vertex in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxMap<PgxVertex<Integer>, Integer> visitCount = graph.createMap(PropertyType.VERTEX, PropertyType.INTEGER);
PgxMap<PgxVertex<Integer>, Integer> count = analyst.randomWalkWithRestart(graph, src, 100, 0.6, visitCount);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<PgxMap<PgxVertex<ID>,java.lang.Integer>> randomWalkWithRestartAsync(PgxGraph graph, PgxVertex<ID> source, int length, double resetProb, PgxMap<PgxVertex<ID>,java.lang.Integer> visitCount)
Random walk with restart does what its name says; it can find approximate stationary distributions
This algorithm performs a random walk over the graph. The walk will start at the given source vertex and will randomly visit neighboring vertices in the graph, going back to the starting point with a probability equal to resetProb. The random walk will also go back to the starting point every time it reaches a vertex with no outgoing edges. The algorithm will stop once it reaches the specified walk length.
The implementation of this algorithm uses an iterative method. Since the algorithm visits the vertices in a random order on each iteration, it is non-deterministic.
O(L) with L = length of the random walk
O(V) with V = number of vertices
graph
- the graph.
source
- (in argument) starting point of the random walk.
length
- (in argument) length (number of steps) of the random walk.
resetProb
- (in argument) probability value for resetting the random walk.
visitCount
- (out argument) map holding the number of visits during the random walk for each vertex in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxMap<PgxVertex<Integer>, Integer> visitCount = graph.createMap(PropertyType.VERTEX, PropertyType.INTEGER);
PgxFuture<PgxMap<PgxVertex<Integer>, Integer>> promise = analyst.randomWalkWithRestartAsync(graph, src, 100, 0.6, visitCount);
public <ID> java.lang.Integer reachability(PgxGraph graph, PgxVertex<ID> source, PgxVertex<ID> dest, int maxHops, boolean ignoreEdgeDirection) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Reachability is a fast way to check if two vertices are reachable from each other.
This algorithm tries to find whether the destination vertex is reachable given the source vertex and the maximum hop distance set by the user. The search can be performed in a directed or undirected way. These options may lead to different hop distances, since an undirected search has fewer restrictions on the possible paths connecting vertices than the directed option. Hence hop distances from an undirected search can be smaller than those from the directed case.
The implementation of this algorithm uses a built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(1)
graph
- the graph.
source
- source vertex for the search.
dest
- destination vertex for the search.
maxHops
- maximum hop distance between the source and destination vertices.
ignoreEdgeDirection
- boolean flag for ignoring the direction of the edges during the search.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
int result = analyst.reachability(graph, src, dst, 2, false);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<java.lang.Integer> reachabilityAsync(PgxGraph graph, PgxVertex<ID> source, PgxVertex<ID> dest, int maxHops, boolean ignoreEdgeDirection)
Reachability is a fast way to check if two vertices are reachable from each other.
This algorithm tries to find whether the destination vertex is reachable given the source vertex and the maximum hop distance set by the user. The search can be performed in a directed or undirected way. These options may lead to different hop distances, since an undirected search has fewer restrictions on the possible paths connecting vertices than the directed option. Hence hop distances from an undirected search can be smaller than those from the directed case.
The implementation of this algorithm uses a built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(1)
graph
- the graph.
source
- source vertex for the search.
dest
- destination vertex for the search.
maxHops
- maximum hop distance between the source and destination vertices.
ignoreEdgeDirection
- boolean flag for ignoring the direction of the edges during the search.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
PgxFuture<Integer> promise = analyst.reachabilityAsync(graph, src, dst, 2, false);
promise.thenAccept(result -> {
...;
});
public <ID> VertexProperty<ID,java.lang.Double> salsa(BipartiteGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
PgxGraph graph = ...;
VertexProperty<Integer, Double> salsa = analyst.salsa(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> salsa(BipartiteGraph graph, double maxDiff, int maxIter) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
maxDiff
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
VertexProperty<Integer, Double> salsa = analyst.salsa(graph, 0.001, 100);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> salsa(BipartiteGraph graph, double maxDiff, int maxIter, VertexProperty<ID,java.lang.Double> salsaRank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
maxDiff
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
salsaRank
- (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> salsa = analyst.salsa(graph, 0.001, 100, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> salsa(BipartiteGraph graph, VertexProperty<ID,java.lang.Double> salsaRank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
salsaRank
- (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> salsa = analyst.salsa(graph, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> salsaAsync(BipartiteGraph graph)
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.salsaAsync(graph);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> salsaAsync(BipartiteGraph graph, double maxDiff, int maxIter)
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
maxDiff
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.salsaAsync(graph, 0.001, 100);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> salsaAsync(BipartiteGraph graph, double maxDiff, int maxIter, VertexProperty<ID,java.lang.Double> salsaRank)
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
maxDiff
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
maxIter
- maximum number of iterations that will be performed.
salsaRank
- (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.salsaAsync(graph, 0.001, 100, rank);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> salsaAsync(BipartiteGraph graph, VertexProperty<ID,java.lang.Double> salsaRank)
SALSA computes ranking scores. It assesses the quality of information and references in linked structures
The idea of hubs and authorities comes from web pages: a hub is regarded as a page that is not authoritative in a specific matter, but that instead links to authority pages, which are regarded as meaningful sources for a particular topic by many hubs. Thus a good hub points to many authorities, while a good authority is pointed to by many hubs. SALSA is an algorithm that computes authority and hub ranking scores for the vertices using the network created by the edges of the [bipartite](prog-guides/mutation-subgraph/subgraph.html#create-a-bipartite-subgraph-based-on-a-vertex-list) graph and assigning weights to the contributions of their 2nd-degree neighbors. This way of computing the scores keeps the authority and hub scores independent of each other; the scores are assigned to the vertices depending on the side of the graph to which they belong (left: hub, right: authority).
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- Bipartite graph.
salsaRank
- (out argument) vertex property holding the normalized authority/hub ranking score for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.salsaAsync(graph, rank);
promise.thenCompose(salsa -> graph.queryPgqlAsync(
"SELECT x, x." + salsa.getName() + " MATCH (x) ORDER BY x." + salsa.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Partition<ID> sccKosaraju(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Kosaraju finds strongly connected components in a graph
Kosaraju's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS and BFS features.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
Partition<Integer> scc = analyst.sccKosaraju(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> sccKosaraju(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Kosaraju finds strongly connected components in a graph
Kosaraju's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS and BFS features.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
partitionDistribution
- vertex property holding the label of the SCC assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> scc = analyst.sccKosaraju(graph, pd);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Partition<ID>> sccKosarajuAsync(PgxGraph graph)
Kosaraju finds strongly connected components in a graph
Kosaraju's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS and BFS features.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<Partition<Integer>> promise = analyst.sccKosarajuAsync(graph);
promise.thenCompose(scc -> graph.queryPgqlAsync(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> sccKosarajuAsync(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution)
Kosaraju finds strongly connected components in a graph
Kosaraju's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS and BFS features.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph
- the graph.
partitionDistribution
- vertex property holding the label of the SCC assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.sccKosarajuAsync(graph, pd);
promise.thenCompose(scc -> graph.queryPgqlAsync(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Partition<ID> sccTarjan(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Tarjan finds strongly connected components in a graph
Tarjan's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(5 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
Partition<Integer> scc = analyst.sccTarjan(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> sccTarjan(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Tarjan finds strongly connected components in a graph
Tarjan's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(5 * V) with V = number of vertices
graph
- the graph.
partitionDistribution
- vertex property holding the label of the SCC assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> scc = analyst.sccTarjan(graph, pd);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Partition<ID>> sccTarjanAsync(PgxGraph graph)
Tarjan finds strongly connected components in a graph
Tarjan's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(5 * V) with V = number of vertices
graph
- the graph.
PgxGraph graph = ...;
PgxFuture<Partition<Integer>> promise = analyst.sccTarjanAsync(graph);
promise.thenCompose(scc -> graph.queryPgqlAsync(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> sccTarjanAsync(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitonDistribution)
Tarjan finds strongly connected components in a graph
Tarjan's algorithm works on directed graphs for finding strongly connected components (SCC). An SCC is a maximal subset of vertices of the graph with the particular characteristic that every vertex in the SCC is reachable from any other vertex in the SCC.
The implementation of this algorithm uses the built-in DFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(5 * V) with V = number of vertices
graph
- the graph.
partitonDistribution
- vertex property holding the label of the SCC assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.sccTarjanAsync(graph, pd);
promise.thenCompose(scc -> graph.queryPgqlAsync(
"SELECT x, x." + scc.getPropertyName() + " MATCH (x) ORDER BY x." + scc.getPropertyName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> AllPaths<ID> shortestPathBellmanFord(PgxGraph graph, ID srcId, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathBellmanFord(PgxGraph, PgxVertex, EdgeProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
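For illustration, a minimal sketch of this vertex ID variant (the ID value 128 and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// the source is passed as a vertex ID instead of a PgxVertex
AllPaths<Integer> paths = analyst.shortestPathBellmanFord(graph, 128, cost);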
public <ID> AllPaths<ID> shortestPathBellmanFord(PgxGraph graph, ID srcId, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathBellmanFord(PgxGraph, PgxVertex, EdgeProperty, VertexProperty, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
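For illustration, a minimal sketch of this vertex ID variant with the out arguments (the ID value 128 and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
// the source is passed as a vertex ID instead of a PgxVertex
AllPaths<Integer> paths = analyst.shortestPathBellmanFord(graph, 128, cost, distance, parent, parentEdge);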
public <ID> AllPaths<ID> shortestPathBellmanFord(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Bellman-Ford finds multiple shortest paths at the same time
The Bellman-Ford algorithm tries to find the shortest path (if one exists) from the given source vertex to each of the other vertices in the graph, while minimizing the distance or cost associated with each edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
AllPaths<Integer> paths = analyst.shortestPathBellmanFord(graph, src, cost);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> shortestPathBellmanFord(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Bellman-Ford finds multiple shortest paths at the same time
The Bellman-Ford algorithm tries to find the shortest path (if one exists) from the given source vertex to each of the other vertices in the graph, while minimizing the distance or cost associated with each edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
distance
- (out argument) vertex property holding the distance to the source vertex for each vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
AllPaths<Integer> paths = analyst.shortestPathBellmanFord(graph, src, cost, distance, parent, parentEdge);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<AllPaths<ID>> shortestPathBellmanFordAsync(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost)
Bellman-Ford finds multiple shortest paths at the same time
The Bellman-Ford algorithm tries to find the shortest path (if one exists) from the given source vertex to each of the other vertices in the graph, while minimizing the distance or cost associated with each edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathBellmanFordAsync(graph, src, cost);
promise.thenAccept(paths -> {
...;
});
public <ID> PgxFuture<AllPaths<ID>> shortestPathBellmanFordAsync(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Bellman-Ford finds multiple shortest paths at the same time
The Bellman-Ford algorithm tries to find the shortest path (if one exists) from the given source vertex to each of the other vertices in the graph, while minimizing the distance or cost associated with each edge.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
distance
- (out argument) vertex property holding the distance to the source vertex for each vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathBellmanFordAsync(
graph, src, cost, distance, parent, parentEdge);
promise.thenAccept(paths -> {
...;
});
public <ID> AllPaths<ID> shortestPathBellmanFordReverse(PgxGraph graph, ID srcId, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathBellmanFordReverse(PgxGraph, PgxVertex, EdgeProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
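For illustration, a minimal sketch of this vertex ID variant (the ID value 128 and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// the source is passed as a vertex ID instead of a PgxVertex
AllPaths<Integer> paths = analyst.shortestPathBellmanFordReverse(graph, 128, cost);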
public <ID> AllPaths<ID> shortestPathBellmanFordReverse(PgxGraph graph, ID srcId, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathBellmanFordReverse(PgxGraph, PgxVertex, EdgeProperty, VertexProperty, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
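For illustration, a minimal sketch of this vertex ID variant with the out arguments (the ID value 128 and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
// the source is passed as a vertex ID instead of a PgxVertex
AllPaths<Integer> paths = analyst.shortestPathBellmanFordReverse(graph, 128, cost, distance, parent, parentEdge);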
public <ID> AllPaths<ID> shortestPathBellmanFordReverse(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Reversed Bellman-Ford finds multiple shortest paths at the same time
This variant of the Bellman-Ford algorithm works in a reversed fashion: it uses the incoming edges instead of the outgoing ones to find the shortest path (if one exists) from the given source vertex to each of the other vertices, while minimizing the distance or cost associated with each edge in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
AllPaths<Integer> paths = analyst.shortestPathBellmanFordReverse(graph, src, cost);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> shortestPathBellmanFordReverse(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Reversed Bellman-Ford finds multiple shortest paths at the same time
This variant of the Bellman-Ford algorithm works in a reversed fashion: it uses the incoming edges instead of the outgoing ones to find the shortest path (if one exists) from the given source vertex to each of the other vertices, while minimizing the distance or cost associated with each edge in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
distance
- (out argument) vertex property holding the distance to the source vertex for each vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
AllPaths<Integer> paths = analyst.shortestPathBellmanFordReverse(graph, src, cost, distance, parent, parentEdge);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<AllPaths<ID>> shortestPathBellmanFordReverseAsync(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost)
Reversed Bellman-Ford finds multiple shortest paths at the same time
This variant of the Bellman-Ford algorithm works in a reversed fashion: it uses the incoming edges instead of the outgoing ones to find the shortest path (if one exists) from the given source vertex to each of the other vertices, while minimizing the distance or cost associated with each edge in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathBellmanFordReverseAsync(graph, src, cost);
promise.thenAccept(paths -> {
...;
});
public <ID> PgxFuture<AllPaths<ID>> shortestPathBellmanFordReverseAsync(PgxGraph graph, PgxVertex<ID> src, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Reversed Bellman-Ford finds multiple shortest paths at the same time
This variant of the Bellman-Ford algorithm works in a reversed fashion: it uses the incoming edges instead of the outgoing ones to find the shortest path (if one exists) from the given source vertex to each of the other vertices, while minimizing the distance or cost associated with each edge in the graph.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(V + E) with V = number of vertices, E = number of edges
O(6 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
cost
- edge property holding the weight of each edge in the graph.
distance
- (out argument) vertex property holding the distance to the source vertex for each vertex in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathBellmanFordReverseAsync(
graph, src, cost, distance, parent, parentEdge);
promise.thenAccept(paths -> {
...;
});
public <ID> PgxPath<ID> shortestPathDijkstra(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
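For illustration, a minimal sketch of this vertex ID variant (the ID values 128 and 333 and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// source and destination are passed as vertex IDs instead of PgxVertex objects
PgxPath<Integer> path = analyst.shortestPathDijkstra(graph, 128, 333, cost);
path.getPathLengthWithCost();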
public <ID> PgxPath<ID> shortestPathDijkstra(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
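For illustration, a minimal sketch of this vertex ID variant with the out arguments (the ID values and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
// source and destination are passed as vertex IDs instead of PgxVertex objects
PgxPath<Integer> path = analyst.shortestPathDijkstra(graph, 128, 333, cost, parent, parentEdge);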
public <ID> PgxPath<ID> shortestPathDijkstra(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Dijkstra is a fast algorithm for finding a shortest path in a graph
Dijkstra's algorithm tries to find the shortest path (if there is one) between the given source and destination vertices, while minimizing the distance or cost associated to each edge in the graph.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxPath<Integer> path = analyst.shortestPathDijkstra(graph, src, dst, cost);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxPath<ID> shortestPathDijkstra(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Dijkstra is a fast algorithm for finding a shortest path in a graph
Dijkstra's algorithm tries to find the shortest path (if there is one) between the given source and destination vertices, while minimizing the distance or cost associated to each edge in the graph.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxPath<Integer> path = analyst.shortestPathDijkstra(graph, src, dst, cost, parent, parentEdge);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<PgxPath<ID>> shortestPathDijkstraAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost)
Dijkstra is a fast algorithm for finding a shortest path in a graph
Dijkstra's algorithm tries to find the shortest path (if there is one) between the given source and destination vertices, while minimizing the distance or cost associated to each edge in the graph.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathDijkstraAsync(graph, src, dst, cost);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxFuture<PgxPath<ID>> shortestPathDijkstraAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Dijkstra is a fast algorithm for finding a shortest path in a graph
Dijkstra's algorithm tries to find the shortest path (if there is one) between the given source and destination vertices, while minimizing the distance or cost associated to each edge in the graph.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathDijkstraAsync(graph, src, dst, cost, parent, parentEdge);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxPath<ID> shortestPathDijkstraBidirectional(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
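For illustration, a minimal sketch of this vertex ID variant (the ID values 128 and 333 and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// source and destination are passed as vertex IDs instead of PgxVertex objects
PgxPath<Integer> path = analyst.shortestPathDijkstraBidirectional(graph, 128, 333, cost);
path.getPathLengthWithCost();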
public <ID> PgxPath<ID> shortestPathDijkstraBidirectional(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
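For illustration, a minimal sketch of this vertex ID variant with the out arguments (the ID values and the "cost" property name are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
// source and destination are passed as vertex IDs instead of PgxVertex objects
PgxPath<Integer> path = analyst.shortestPathDijkstraBidirectional(graph, 128, 333, cost, parent, parentEdge);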
public <ID> PgxPath<ID> shortestPathDijkstraBidirectional(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxPath<Integer> path = analyst.shortestPathDijkstraBidirectional(graph, src, dst, cost);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxPath<ID> shortestPathDijkstraBidirectional(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxPath<Integer> path = analyst.shortestPathDijkstraBidirectional(graph, src, dst, cost, parent, parentEdge);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<PgxPath<ID>> shortestPathDijkstraBidirectionalAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost)
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathDijkstraBidirectionalAsync(graph, src, dst, cost);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxFuture<PgxPath<ID>> shortestPathDijkstraBidirectionalAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, java.lang.String parentName, java.lang.String parentEdgeName)
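This overload carries no description here; a hedged sketch, assuming the two strings name the vertex properties that will hold the parent and parent-edge information ("myParent" and "myParentEdge" are hypothetical names):
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
// the output vertex properties are referred to by name instead of being passed as VertexProperty objects
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathDijkstraBidirectionalAsync(graph, src, dst, cost, "myParent", "myParentEdge");
promise.thenAccept(path -> path.getPathLengthWithCost());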
public <ID> PgxFuture<PgxPath<ID>> shortestPathDijkstraBidirectionalAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Bidirectional dijkstra is a fast algorithm for finding a shortest path in a graph
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
parent
- (out argument) vertex property holding the parent vertex of each vertex in the shortest path.
parentEdge
- (out argument) vertex property holding the edge ID linking the current vertex in the path with the previous vertex in the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathDijkstraBidirectionalAsync(
graph, src, dst, cost, parent, parentEdge);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxPath<ID> shortestPathFilteredDijkstra(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathFilteredDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
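For illustration, a minimal sketch of this vertex ID variant (the ID values, the "cost" property name, and the filter expression are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
// source and destination are passed as vertex IDs instead of PgxVertex objects
PgxPath<Integer> path = analyst.shortestPathFilteredDijkstra(graph, 128, 333, cost, filter);
path.getPathLengthWithCost();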
public <ID> PgxPath<ID> shortestPathFilteredDijkstra(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
shortestPathFilteredDijkstra(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
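For illustration, a minimal sketch of this vertex ID variant with the out arguments (the ID values, the "cost" property name, and the filter expression are assumptions mirroring the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
// source and destination are passed as vertex IDs instead of PgxVertex objects
PgxPath<Integer> path = analyst.shortestPathFilteredDijkstra(graph, 128, 333, cost, filter, parent, parentEdge);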
public <ID> PgxPath<ID> shortestPathFilteredDijkstra(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of the Dijkstra's algorithm tries to find the shortest path while also taking into account a filter expression, which will add restrictions over the potential edges when looking for the shortest path between the source and destination vertices.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph
- the graph.
src
- the source vertex from the graph for the path.
dst
- the destination vertex from the graph for the path.
cost
- edge property holding the (positive) weight of each edge in the graph.
filterExpr
- filter expression that adds restrictions over the potential edges when looking for the shortest path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
PgxPath<Integer> path = analyst.shortestPathFilteredDijkstra(graph, src, dst, cost, filter);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxPath<ID> shortestPathFilteredDijkstra(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of the Dijkstra's algorithm tries to find the shortest path while also taking into account a filter expression, which will add restrictions over the potential edges when looking for the shortest path between the source and destination vertices.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
cost - edge property holding the weight of each edge in the graph.
filterExpr - filter expression restricting the edges that can be part of the path.
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxPath<Integer> path = analyst.shortestPathFilteredDijkstra(graph, src, dst, cost, filter, parent, parentEdge);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<PgxPath<ID>> shortestPathFilteredDijkstraAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr)
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of Dijkstra's algorithm tries to find the shortest path while also taking into account a filter expression, which adds restrictions on the candidate edges when looking for the shortest path between the source and destination vertices.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
cost - edge property holding the weight of each edge in the graph.
filterExpr - filter expression restricting the edges that can be part of the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathFilteredDijkstraAsync(graph, src, dst, cost, filter);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxFuture<PgxPath<ID>> shortestPathFilteredDijkstraAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Filtered Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of Dijkstra's algorithm tries to find the shortest path while also taking into account a filter expression, which adds restrictions on the candidate edges when looking for the shortest path between the source and destination vertices.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(4 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
cost - edge property holding the weight of each edge in the graph.
filterExpr - filter expression restricting the edges that can be part of the path.
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathFilteredDijkstraAsync(
graph, src, dst, cost, filter, parent, parentEdge);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxPath<ID> shortestPathFilteredDijkstraBidirectional(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around shortestPathFilteredDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxPath<ID> shortestPathFilteredDijkstraBidirectional(PgxGraph graph, ID srcId, ID dstId, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around shortestPathFilteredDijkstraBidirectional(PgxGraph, PgxVertex, PgxVertex, EdgeProperty, GraphFilter, VertexProperty, VertexProperty) taking vertex IDs instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
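For example, a minimal sketch of the ID-based call, under the same assumptions (integer vertex IDs, the cost property and filter from the examples below):
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
// vertex IDs 128 and 333 are passed directly instead of PgxVertex objects
PgxPath<Integer> path = analyst.shortestPathFilteredDijkstraBidirectional(graph, 128, 333, cost, filter);
path.getPathLengthWithCost();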
public <ID> PgxPath<ID> shortestPathFilteredDijkstraBidirectional(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Bidirectional Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex, while also applying the restrictions on the edges given by the filter expression. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
cost - edge property holding the weight of each edge in the graph.
filterExpr - filter expression restricting the edges that can be part of the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
PgxPath<Integer> path = analyst.shortestPathFilteredDijkstraBidirectional(graph, src, dst, cost, filter);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxPath<ID> shortestPathFilteredDijkstraBidirectional(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Bidirectional Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex, while also applying the restrictions on the edges given by the filter expression. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
cost - edge property holding the weight of each edge in the graph.
filterExpr - filter expression restricting the edges that can be part of the path.
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxPath<Integer> path = analyst.shortestPathFilteredDijkstraBidirectional(
graph, src, dst, cost, filter, parent, parentEdge);
path.getPathLengthWithCost();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<PgxPath<ID>> shortestPathFilteredDijkstraBidirectionalAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr)
Bidirectional Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex, while also applying the restrictions on the edges given by the filter expression. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
cost - edge property holding the weight of each edge in the graph.
filterExpr - filter expression restricting the edges that can be part of the path.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathFilteredDijkstraBidirectionalAsync(
graph, src, dst, cost, filter);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> PgxFuture<PgxPath<ID>> shortestPathFilteredDijkstraBidirectionalAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr, java.lang.String parentName, java.lang.String parentEdgeName)
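A minimal sketch for this overload, which identifies the parent and parent-edge output properties by name; the names "parent" and "parentEdge" below are illustrative assumptions:
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
// "parent" and "parentEdge" are illustrative names for the output vertex properties
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathFilteredDijkstraBidirectionalAsync(
    graph, src, dst, cost, filter, "parent", "parentEdge");
promise.thenAccept(path -> {
    path.getPathLengthWithCost();
});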
public <ID> PgxFuture<PgxPath<ID>> shortestPathFilteredDijkstraBidirectionalAsync(PgxGraph graph, PgxVertex<ID> src, PgxVertex<ID> dst, EdgeProperty<java.lang.Double> cost, GraphFilter filterExpr, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Bidirectional Dijkstra is a fast algorithm for finding a shortest path while also filtering edges
This variant of Dijkstra's algorithm searches for the shortest path in two directions: it does a forward search from the source vertex and a backward one from the destination vertex, while also applying the restrictions on the edges given by the filter expression. If a path between the vertices exists, both searches will meet at an intermediate point.
This algorithm runs in a sequential way.
O(E + V log V) with V = number of vertices, E = number of edges
O(10 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
dst - the destination vertex.
cost - edge property holding the weight of each edge in the graph.
filterExpr - filter expression restricting the edges that can be part of the path.
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxVertex<Integer> dst = graph.getVertex(333);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
EdgeFilter filter = EdgeFilter.fromExpression("edge.cost > 5");
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<PgxPath<Integer>> promise = analyst.shortestPathFilteredDijkstraBidirectionalAsync(
graph, src, dst, cost, filter, parent, parentEdge);
promise.thenAccept(path -> {
path.getPathLengthWithCost();
});
public <ID> AllPaths<ID> shortestPathHopDist(PgxGraph graph, ID srcId) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around shortestPathHopDist(PgxGraph, PgxVertex) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> shortestPathHopDist(PgxGraph graph, ID srcId, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around shortestPathHopDist(PgxGraph, PgxVertex, VertexProperty, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
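For example, a minimal sketch of the ID-based call, assuming an integer-ID graph:
PgxGraph graph = ...;
// vertex ID 128 is passed directly instead of a PgxVertex object
AllPaths<Integer> paths = analyst.shortestPathHopDist(graph, 128);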
public <ID> AllPaths<ID> shortestPathHopDist(PgxGraph graph, PgxVertex<ID> src) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each vertex with respect to the given source vertex in the input and will also return the parent vertex and linking edge for each vertex. The returned information allows tracing back shortest paths from any reachable vertex to the source vertex.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
AllPaths<Integer> paths = analyst.shortestPathHopDist(graph, src);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> shortestPathHopDist(PgxGraph graph, PgxVertex<ID> src, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each vertex with respect to the given source vertex in the input and will also return the parent vertex and linking edge for each vertex. The returned information allows tracing back shortest paths from any reachable vertex to the source vertex.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
distance - (out argument)
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
AllPaths<Integer> paths = analyst.shortestPathHopDist(graph, src, distance, parent, parentEdge);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<AllPaths<ID>> shortestPathHopDistAsync(PgxGraph graph, PgxVertex<ID> src)
Hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each vertex with respect to the given source vertex in the input and will also return the parent vertex and linking edge for each vertex. The returned information allows tracing back shortest paths from any reachable vertex to the source vertex.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathHopDistAsync(graph, src);
promise.thenAccept(paths -> {
...;
});
public <ID> PgxFuture<AllPaths<ID>> shortestPathHopDistAsync(PgxGraph graph, PgxVertex<ID> src, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each vertex with respect to the given source vertex in the input and will also return the parent vertex and linking edge for each vertex. The returned information allows tracing back shortest paths from any reachable vertex to the source vertex.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
distance - (out argument)
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathHopDistAsync(graph, src, distance, parent, parentEdge);
promise.thenAccept(paths -> {
...;
});
public <ID> AllPaths<ID> shortestPathHopDistReverse(PgxGraph graph, ID srcId) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around shortestPathHopDistReverse(PgxGraph, PgxVertex) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> shortestPathHopDistReverse(PgxGraph graph, ID srcId, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Convenience wrapper around shortestPathHopDistReverse(PgxGraph, PgxVertex, VertexProperty, VertexProperty, VertexProperty) taking a vertex ID instead of PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
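Likewise, a minimal sketch of the ID-based reverse call, assuming an integer-ID graph:
PgxGraph graph = ...;
AllPaths<Integer> paths = analyst.shortestPathHopDistReverse(graph, 128);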
public <ID> AllPaths<ID> shortestPathHopDistReverse(PgxGraph graph, PgxVertex<ID> src) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Backwards hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each node with respect to the given source node in the input and will also return the parent node and linking edge for each node. The returned information allows tracing back shortest paths from any reachable node to the source node.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
AllPaths<Integer> paths = analyst.shortestPathHopDistReverse(graph, src);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> AllPaths<ID> shortestPathHopDistReverse(PgxGraph graph, PgxVertex<ID> src, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Backwards hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each node with respect to the given source node in the input and will also return the parent node and linking edge for each node. The returned information allows tracing back shortest paths from any reachable node to the source node.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
distance - (out argument)
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
AllPaths<Integer> paths = analyst.shortestPathHopDistReverse(graph, src, distance, parent, parentEdge);
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<AllPaths<ID>> shortestPathHopDistReverseAsync(PgxGraph graph, PgxVertex<ID> src)
Backwards hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each node with respect to the given source node in the input and will also return the parent node and linking edge for each node. The returned information allows tracing back shortest paths from any reachable node to the source node.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathHopDistReverseAsync(graph, src);
promise.thenAccept(paths -> {
...;
});
public <ID> PgxFuture<AllPaths<ID>> shortestPathHopDistReverseAsync(PgxGraph graph, PgxVertex<ID> src, VertexProperty<ID,java.lang.Double> distance, VertexProperty<ID,PgxVertex<ID>> parent, VertexProperty<ID,PgxEdge> parentEdge)
Backwards hop distance can give a relatively fast insight on the distances in a graph
The Hop distance of two vertices S and V in a graph is the number of edges that are in a shortest path connecting them. This algorithm will return the distance of each node with respect to the given source node in the input and will also return the parent node and linking edge for each node. The returned information allows tracing back shortest paths from any reachable node to the source node.
The implementation of this algorithm uses the built-in BFS feature.
O(V + E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
src - the source vertex.
distance - (out argument)
parent - (out argument)
parentEdge - (out argument)
PgxGraph graph = ...;
PgxVertex<Integer> src = graph.getVertex(128);
VertexProperty<Integer, Double> distance = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, PgxVertex<Integer>> parent = graph.createVertexProperty(PropertyType.VERTEX);
VertexProperty<Integer, PgxEdge> parentEdge = graph.createVertexProperty(PropertyType.EDGE);
PgxFuture<AllPaths<Integer>> promise = analyst.shortestPathHopDistReverseAsync(
graph, src, distance, parent, parentEdge);
promise.thenAccept(paths -> {
...;
});
public oracle.pgx.api.beta.mllib.SupervisedGraphWiseModelBuilder supervisedGraphWiseModelBuilder()
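A minimal, hedged sketch of the typical builder flow; the property names ("feature1", "feature2", "label") are illustrative assumptions and the exact setters may differ between releases of the beta mllib package:
SupervisedGraphWiseModelBuilder builder = analyst.supervisedGraphWiseModelBuilder();
SupervisedGraphWiseModel model = builder
    .setVertexInputPropertyNames("feature1", "feature2")  // assumed feature property names
    .setVertexTargetPropertyName("label")                 // assumed label property name
    .build();
// model.fit(graph) would then train the model on a given PgxGraph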
public <ID> VertexProperty<ID,java.lang.Integer> topologicalSchedule(PgxGraph graph, VertexSet<ID> source) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Topological schedule gives an order of visit for the reachable vertices from the source
Topological schedule sets an order over the vertices in a graph based on their proximity to the vertices of the given source set. The algorithm does a BFS traversal for each vertex of the source set in order to assign the correct scheduling order to all the reachable vertices, even if the graph is undirected or has cycles. The vertices that are not reachable will be assigned a value of -1.
The implementation of this algorithm uses a built-in BFS feature.
O(k * (V + E)) with V = number of vertices, E = number of edges, k = size of the source set
O(V) with V = number of vertices
graph - the graph.
source - set of vertices to be used as the starting points for the scheduling order.
PgxGraph graph = ...;
VertexSet<Integer> source = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Integer> topoSched = analyst.topologicalSchedule(graph, source);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + topoSched.getName() + " WHERE (x) ORDER BY x." + topoSched.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Integer> topologicalSchedule(PgxGraph graph, VertexSet<ID> source, VertexProperty<ID,java.lang.Integer> topoSched) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Topological schedule gives an order of visit for the reachable vertices from the source
Topological schedule sets an order over the vertices in a graph based on their proximity to the vertices of the given source set. The algorithm does a BFS traversal for each vertex of the source set in order to assign the correct scheduling order to all the reachable vertices, even if the graph is undirected or has cycles. The vertices that are not reachable will be assigned a value of -1.
The implementation of this algorithm uses a built-in BFS feature.
O(k * (V + E)) with V = number of vertices, E = number of edges, k = size of the source set
O(V) with V = number of vertices
graph - the graph.
source - set of vertices to be used as the starting points for the scheduling order.
topoSched - (out argument) vertex property holding the scheduled order of each vertex.
PgxGraph graph = ...;
VertexSet<Integer> source = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Integer> prop = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, Integer> topoSched = analyst.topologicalSchedule(graph, source, prop);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + topoSched.getName() + " WHERE (x) ORDER BY x." + topoSched.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> topologicalScheduleAsync(PgxGraph graph, VertexSet<ID> source)
Topological schedule gives an order of visit for the reachable vertices from the source
Topological schedule sets an order over the vertices in a graph based on their proximity to the vertices of the given source set. The algorithm does a BFS traversal for each vertex of the source set in order to assign the correct scheduling order to all the reachable vertices, even if the graph is undirected or has cycles. The vertices that are not reachable will be assigned a value of -1.
The implementation of this algorithm uses a built-in BFS feature.
O(k * (V + E)) with V = number of vertices, E = number of edges, k = size of the source set
O(V) with V = number of vertices
graph - the graph.
source - set of vertices to be used as the starting points for the scheduling order.
PgxGraph graph = ...;
VertexSet<Integer> source = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.topologicalScheduleAsync(graph, source);
promise.thenCompose(topoSched -> graph.queryPgqlAsync(
"SELECT x, x." + topoSched.getName() + " WHERE (x) ORDER BY x." + topoSched.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> topologicalScheduleAsync(PgxGraph graph, VertexSet<ID> source, VertexProperty<ID,java.lang.Integer> topoSched)
Topological schedule gives an order of visit for the reachable vertices from the source
Topological schedule sets an order over the vertices in a graph based on their proximity to the vertices of the given source set. The algorithm does a BFS traversal for each vertex of the source set in order to assign the correct scheduling order to all the reachable vertices, even if the graph is undirected or has cycles. The vertices that are not reachable will be assigned a value of -1.
The implementation of this algorithm uses a built-in BFS feature.
O(k * (V + E)) with V = number of vertices, E = number of edges, k = size of the source set
O(V) with V = number of vertices
graph - the graph.
source - set of vertices to be used as the starting points for the scheduling order.
topoSched - (out argument) vertex property holding the scheduled order of each vertex.
PgxGraph graph = ...;
VertexSet<Integer> source = graph.getVertices(VertexFilter.fromExpression("vertex.prop1 < 10"));
VertexProperty<Integer, Integer> topoSched = graph.createVertexProperty(PropertyType.INTEGER);
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.topologicalScheduleAsync(graph, source, topoSched);
promise.thenCompose(sched -> graph.queryPgqlAsync(
"SELECT x, x." + sched.getName() + " WHERE (x) ORDER BY x." + sched.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Integer> topologicalSort(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Topological sort gives an order of visit for vertices in directed acyclic graphs
Topological sort tries to set an order over the vertices in a graph using the direction of the edges. A directed graph has a topological order if and only if it has no cycles, i.e. it is a directed acyclic graph. The algorithm visits the vertices in a DFS-like fashion to set up their order. The order of the vertices is returned as a vertex property, and the values will be set to -1 if there is a cycle in the graph.
The implementation of this algorithm is sequential due to the ordering constraint.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> topoSort = analyst.topologicalSort(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + topoSort.getName() + " WHERE (x) ORDER BY x." + topoSort.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Integer> topologicalSort(PgxGraph graph, VertexProperty<ID,java.lang.Integer> topoSort) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Topological sort gives an order of visit for vertices in directed acyclic graphs
Topological sort tries to set an order over the vertices in a graph using the direction of the edges. A directed graph has a topological order if and only if it has no cycles, i.e. it is a directed acyclic graph. The algorithm visits the vertices in a DFS-like fashion to set up their order. The order of the vertices is returned as a vertex property, and the values will be set to -1 if there is a cycle in the graph.
The implementation of this algorithm is sequential due to the ordering constraint.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
topoSort - (out argument) vertex property holding the topological order of each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> prop = graph.createVertexProperty(PropertyType.INTEGER);
VertexProperty<Integer, Integer> topoSort = analyst.topologicalSort(graph, prop);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + topoSort.getName() + " WHERE (x) ORDER BY x." + topoSort.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> topologicalSortAsync(PgxGraph graph)
Topological sort gives an order of visit for vertices in directed acyclic graphs
Topological sort tries to set an order over the vertices in a graph using the direction of the edges. A directed graph has a topological order if and only if it has no cycles, i.e. it is a directed acyclic graph. The algorithm visits the vertices in a DFS-like fashion to set up their order. The order of the vertices is returned as a vertex property, and the values will be set to -1 if there is a cycle in the graph.
The implementation of this algorithm is sequential due to the ordering constraint.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.topologicalSortAsync(graph);
promise.thenCompose(topoSort -> graph.queryPgqlAsync(
"SELECT x, x." + topoSort.getName() + " WHERE (x) ORDER BY x." + topoSort.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Integer>> topologicalSortAsync(PgxGraph graph, VertexProperty<ID,java.lang.Integer> topoSort)
Topological sort gives an order of visit for vertices in directed acyclic graphs
Topological sort tries to set an order over the vertices in a graph using the direction of the edges. A directed graph has a topological order if and only if it has no cycles, i.e. it is a directed acyclic graph. The algorithm visits the vertices in a DFS-like fashion to set up their order. The order of the vertices is returned as a vertex property, and the values will be set to -1 if there is a cycle in the graph.
The implementation of this algorithm is sequential due to the ordering constraint.
O(V + E) with V = number of vertices, E = number of edges
O(2 * V) with V = number of vertices
graph - the graph.
topoSort - (out argument) vertex property holding the topological order of each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Integer> topoSort = graph.createVertexProperty(PropertyType.INTEGER);
PgxFuture<VertexProperty<Integer, Integer>> promise = analyst.topologicalSortAsync(graph, topoSort);
promise.thenCompose(sorted -> graph.queryPgqlAsync(
"SELECT x, x." + sorted.getName() + " WHERE (x) ORDER BY x." + sorted.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public java.lang.String toString()
Overrides: toString in class java.lang.Object
public <ID> VertexProperty<ID,java.lang.Double> vertexBetweennessCentrality(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
The Betweenness Centrality of a vertex V in a graph is the sum of the fraction of shortest paths that pass through V from all the possible shortest paths connecting every possible pair of vertices S, T in the graph, such that V is different from S and T. Because of its definition, the algorithm is meant for connected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
VertexProperty<Integer, Double> betweenness = analyst.vertexBetweennessCentrality(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> vertexBetweennessCentrality(PgxGraph graph, VertexProperty<ID,java.lang.Double> bc) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
The Betweenness Centrality of a vertex V in a graph is the sum of the fraction of shortest paths that pass through V from all the possible shortest paths connecting every possible pair of vertices S, T in the graph, such that V is different from S and T. Because of its definition, the algorithm is meant for connected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
bc - (out argument) vertex property holding the betweenness centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> bc = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> betweenness = analyst.vertexBetweennessCentrality(graph, bc);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> vertexBetweennessCentralityAsync(PgxGraph graph)
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
The Betweenness Centrality of a vertex V in a graph is the sum of the fraction of shortest paths that pass through V from all the possible shortest paths connecting every possible pair of vertices S, T in the graph, such that V is different from S and T. Because of its definition, the algorithm is meant for connected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.vertexBetweennessCentralityAsync(graph);
promise.thenCompose(betweenness -> graph.queryPgqlAsync(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> vertexBetweennessCentralityAsync(PgxGraph graph, VertexProperty<ID,java.lang.Double> bc)
Betweenness centrality measures the centrality of the vertices to identify important vertices for the flow of information
The Betweenness Centrality of a vertex V in a graph is the sum of the fraction of shortest paths that pass through V from all the possible shortest paths connecting every possible pair of vertices S, T in the graph, such that V is different from S and T. Because of its definition, the algorithm is meant for connected graphs.
The implementation of this algorithm uses a parallel BFS method called Multi-Source BFS (MS-BFS) for a faster and more efficient search of the shortest paths. It is an expensive algorithm to run on large graphs.
O(V * E) with V = number of vertices, E = number of edges
O(3 * V) with V = number of vertices
graph - the graph.
bc - (out argument) vertex property holding the betweenness centrality value for each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Double> bc = graph.createVertexProperty(PropertyType.DOUBLE);
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.vertexBetweennessCentralityAsync(graph, bc);
promise.thenCompose(betweenness -> graph.queryPgqlAsync(
"SELECT x, x." + betweenness.getName() + " MATCH (x) ORDER BY x." + betweenness.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Partition<ID> wcc(PgxGraph graph) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Identifying weakly connected components can be useful for clustering graph data
This algorithm finds weakly connected components (WCC) in a directed graph. A WCC is a maximal subset of vertices of the graph with the particular characteristic that for every pair of vertices U and V in the WCC there must be a directed path connecting U to V or vice versa. It is a non-deterministic algorithm because of its parallelized implementation.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * d) with d = diameter of the graph
O(2 * V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
Partition<Integer> wcc = analyst.wcc(graph);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + wcc.getPropertyName() + " MATCH (x) ORDER BY x." + wcc.getPropertyName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Partition<ID> wcc(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
Identifying weakly connected components can be useful for clustering graph data
This algorithm finds weakly connected components (WCC) in a directed graph. A WCC is a maximal subset of vertices of the graph with the particular characteristic that for every pair of vertices U and V in the WCC there must be a directed path connecting U to V or vice versa. It is a non-deterministic algorithm because of its parallelized implementation.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * d) with d = diameter of the graph
O(2 * V) with V = number of vertices
graph - the graph.
partitionDistribution - vertex property holding the label of the WCC assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
Partition<Integer> wcc = analyst.wcc(graph, pd);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + wcc.getPropertyName() + " MATCH (x) ORDER BY x." + wcc.getPropertyName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Partition<ID>> wccAsync(PgxGraph graph)
Identifying weakly connected components can be useful for clustering graph data
This algorithm finds weakly connected components (WCC) in a directed graph. A WCC is a maximal subset of vertices of the graph with the particular characteristic that for every pair of vertices U and V in the WCC there must be a directed path connecting U to V or vice versa. It is a non-deterministic algorithm because of its parallelized implementation.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * d) with d = diameter of the graph
O(2 * V) with V = number of vertices
graph - the graph.
PgxGraph graph = ...;
PgxFuture<Partition<Integer>> promise = analyst.wccAsync(graph);
promise.thenCompose(wcc -> graph.queryPgqlAsync(
"SELECT x, x." + wcc.getPropertyName() + " MATCH (x) ORDER BY x." + wcc.getPropertyName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<Partition<ID>> wccAsync(PgxGraph graph, java.lang.String partitonDistributionName)
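A minimal sketch for this overload, which names the output vertex property holding the component label; the property name "wcc" below is an illustrative assumption:
PgxGraph graph = ...;
// "wcc" is an illustrative name for the output vertex property
PgxFuture<Partition<Integer>> promise = analyst.wccAsync(graph, "wcc");
promise.thenAccept(wcc -> {
    ...;
});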
public <ID> PgxFuture<Partition<ID>> wccAsync(PgxGraph graph, VertexProperty<ID,java.lang.Long> partitionDistribution)
Identifying weakly connected components can be useful for clustering graph data
This algorithm finds weakly connected components (WCC) in a directed graph. A WCC is a maximal subset of vertices of the graph with the particular characteristic that for every pair of vertices U and V in the WCC there must be a directed path connecting U to V or vice versa. It is a non-deterministic algorithm because of its parallelized implementation.
This algorithm is designed to run in parallel given its high degree of parallelization.
O(E * d) with d = diameter of the graph
O(2 * V) with V = number of vertices
graph - the graph.
partitionDistribution - vertex property holding the label of the WCC assigned to each vertex.
PgxGraph graph = ...;
VertexProperty<Integer, Long> pd = graph.createVertexProperty(PropertyType.LONG);
PgxFuture<Partition<Integer>> promise = analyst.wccAsync(graph, pd);
promise.thenCompose(wcc -> graph.queryPgqlAsync(
"SELECT x, x." + wcc.getPropertyName() + " MATCH (x) ORDER BY x." + wcc.getPropertyName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, boolean norm, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, false, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, false, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, 0.001, 0.85, 100, false, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
norm - boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, 0.001, 0.85, 100, false, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, double e, double d, int max, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, 0.001, 0.85, 100, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, double e, double d, int max, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
e - maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d - damping factor.
max - maximum number of iterations that will be performed.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, 0.001, 0.85, 100, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, EdgeProperty<java.lang.Double> weight) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
weight - edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, cost);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> VertexProperty<ID,java.lang.Double> weightedPagerank(PgxGraph graph, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
PageRank on weighted edges. It compares and spots out important vertices in a graph
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph - the graph.
weight - edge property holding the weight of each edge in the graph.
rank - (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
VertexProperty<Integer, Double> pagerank = analyst.weightedPagerank(graph, cost, rank);
PgqlResultSet rs = graph.queryPgql(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC");
rs.print();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, boolean norm, EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(graph, false, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
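If chaining is not needed, the returned PgxFuture can instead be awaited directly; a minimal sketch, assuming the usual java.util.concurrent.Future contract (get() blocks and reports failures as ExecutionException).
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(graph, false, cost);
// Block until the analysis completes and retrieve the computed property.
VertexProperty<Integer, Double> pagerank = promise.get();
System.out.println("Computed property: " + pagerank.getName());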
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight
- edge property holding the weight of each edge in the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(graph, false, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(
graph, 0.001, 0.85, 100, false, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, double e, double d, int max, boolean norm, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
norm
- boolean flag to determine whether the algorithm will take into account dangling vertices for the ranking scores.
weight
- edge property holding the weight of each edge in the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(
graph, 0.001, 0.85, 100, false, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
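Restated with each positional argument labeled (same values as in the example above, purely for readability):
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(
    graph,
    0.001, // e: stop once the summed per-vertex error drops below this tolerance
    0.85,  // d: damping factor
    100,   // max: maximum number of iterations
    false, // norm: whether dangling vertices are taken into account for the ranking scores
    cost,  // weight: edge property supplying the per-edge weights
    rank); // rank: out argument that receives the normalized PageRank values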
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, double e, double d, int max, EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
weight
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(graph, 0.001, 0.85, 100, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, double e, double d, int max, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
e
- maximum tolerated error value. The algorithm will stop once the sum of the error values of all vertices becomes smaller than this value.
d
- damping factor.
max
- maximum number of iterations that will be performed.
weight
- edge property holding the weight of each edge in the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(
graph, 0.001, 0.85, 100, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, EdgeProperty<java.lang.Double> weight)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
weight
- edge property holding the weight of each edge in the graph.
PgxGraph graph = ...;
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(graph, cost);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> PgxFuture<VertexProperty<ID,java.lang.Double>> weightedPagerankAsync(PgxGraph graph, EdgeProperty<java.lang.Double> weight, VertexProperty<ID,java.lang.Double> rank)
PageRank on weighted edges. It compares vertices and identifies the important ones in a graph.
The Weighted PageRank works like the original PageRank algorithm, except that it allows for a weight value assigned to each edge. This weight determines the fraction of the PageRank score that will flow from the source vertex through the current edge to its destination vertex.
The implementation of this algorithm uses an iterative method. The PageRank values of all the vertices in the graph are computed, and hence updated, at each iteration step.
O(E * k) with E = number of edges, k <= maximum number of iterations
O(2 * V) with V = number of vertices
graph
- the graph.
weight
- edge property holding the weight of each edge in the graph.
rank
- (out argument) vertex property holding the (normalized) PageRank value for each vertex (a value between 0 and 1).
PgxGraph graph = ...;
VertexProperty<Integer, Double> rank = graph.createVertexProperty(PropertyType.DOUBLE);
EdgeProperty<Double> cost = graph.getEdgeProperty("cost");
PgxFuture<VertexProperty<Integer, Double>> promise = analyst.weightedPagerankAsync(graph, cost, rank);
promise.thenCompose(pagerank -> graph.queryPgqlAsync(
"SELECT x, x." + pagerank.getName() + " MATCH (x) ORDER BY x." + pagerank.getName() + " DESC"))
.thenAccept(PgqlResultSet::print);
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, ID vertexId, int topK) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
whomToFollow(PgxGraph, PgxVertex, int) taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, ID vertexId, int topK, int sizeCircleOfTrust) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
whomToFollow(PgxGraph, PgxVertex, int, int) taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, ID vertexId, int topK, int sizeCircleOfTrust, int maxIter, java.math.BigDecimal tol, java.math.BigDecimal dampingFactor, int salsaMaxIter, java.math.BigDecimal salsaTol) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
whomToFollow(PgxGraph, PgxVertex, int, int, int, BigDecimal, BigDecimal, int, BigDecimal) taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, ID vertexId, int topK, int sizeCircleOfTrust, int maxIter, java.math.BigDecimal tol, java.math.BigDecimal dampingFactor, int salsaMaxIter, java.math.BigDecimal salsaTol, VertexSequence<ID> hubs, VertexSequence<ID> authorities) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
whomToFollow(PgxGraph, PgxVertex, int, int, int, BigDecimal, BigDecimal, int, BigDecimal, VertexSequence, VertexSequence) taking a vertex ID instead of a PgxVertex.
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(graph, vertex);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
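The two returned sequences can then be inspected; a minimal sketch, assuming VertexSequence can be iterated over its PgxVertex elements like the other PGX collections (the printed IDs are only an illustration).
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(graph, vertex);
VertexSequence<Integer> similarUsers = wtf.getFirst();   // hub vertices: users similar to the source
VertexSequence<Integer> usersToFollow = wtf.getSecond(); // authority vertices: users to follow
for (PgxVertex<Integer> user : usersToFollow) {
    System.out.println("recommended user: " + user.getId());
}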
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex, int topK) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(graph, vertex, 100);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(graph, vertex, 100, 500);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust, int maxIter, double tol, double dampingFactor, int salsaMaxIter, double salsaTol) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
maxIter
- maximum number of iterations that will be performed for the Pagerank stage.
tol
- maximum tolerated error value for the Pagerank stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
dampingFactor
- damping factor for the Pagerank stage.
salsaMaxIter
- maximum number of iterations that will be performed for the SALSA stage.
salsaTol
- maximum tolerated error value for the SALSA stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf =
analyst.whomToFollow(graph, vertex, 100, 500, 100, 0.001, 0.85, 100, 0.001);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
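Restated with each positional argument labeled (same values as in the example above, purely for readability):
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(
    graph,
    vertex, // source vertex the recommendations are personalized for
    100,    // topK: maximum number of recommendations, should be smaller than sizeCircleOfTrust
    500,    // sizeCircleOfTrust: maximum size of the circle of trust
    100,    // maxIter: iteration cap for the Pagerank stage
    0.001,  // tol: error tolerance for the Pagerank stage
    0.85,   // dampingFactor: damping factor for the Pagerank stage
    100,    // salsaMaxIter: iteration cap for the SALSA stage
    0.001); // salsaTol: error tolerance for the SALSA stage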
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust, int maxIter, double tol, double dampingFactor, int salsaMaxIter, double salsaTol, VertexSequence<ID> hubs, VertexSequence<ID> authorities) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
maxIter
- maximum number of iterations that will be performed for the Pagerank stage.
tol
- maximum tolerated error value for the Pagerank stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
dampingFactor
- damping factor for the Pagerank stage.
salsaMaxIter
- maximum number of iterations that will be performed for the SALSA stage.
salsaTol
- maximum tolerated error value for the SALSA stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> auth = graph.createVertexSequence();
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf =
analyst.whomToFollow(graph, vertex, 100, 500, 100, 0.001, 0.85, 100, 0.001, hubs, auth);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust, VertexSequence<ID> hubs, VertexSequence<ID> authorities) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> auth = graph.createVertexSequence();
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf =
analyst.whomToFollow(graph, vertex, 100, 500, hubs, auth);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex, int topK, VertexSequence<ID> hubs, VertexSequence<ID> authorities) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> auth = graph.createVertexSequence();
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(graph, vertex, 100, hubs, auth);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> Pair<VertexSequence<ID>,VertexSequence<ID>> whomToFollow(PgxGraph graph, PgxVertex<ID> vertex, VertexSequence<ID> hubs, VertexSequence<ID> authorities) throws java.util.concurrent.ExecutionException, java.lang.InterruptedException
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> auth = graph.createVertexSequence();
PgxVertex<Integer> vertex = graph.getVertex(128);
Pair<VertexSequence<Integer>, VertexSequence<Integer>> wtf = analyst.whomToFollow(graph, vertex, hubs, auth);
wtf.getFirst();
wtf.getSecond();
java.util.concurrent.ExecutionException
java.lang.InterruptedException
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
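Failures of the asynchronous call surface through the future rather than as checked exceptions. A minimal sketch of attaching a fallback handler, assuming PgxFuture supports the standard CompletionStage operations already used in these examples (exceptionally in addition to thenAccept) and that the returned sequences expose a size() like other PGX collections:
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
analyst.whomToFollowAsync(graph, vertex)
    .thenAccept(wtf -> {
        System.out.println("similar users found: " + wtf.getFirst().size());
        System.out.println("users to follow found: " + wtf.getSecond().size());
    })
    .exceptionally(t -> {
        t.printStackTrace(); // log the failure; the chain then completes normally
        return null;
    });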
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex, int topK)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex, 100);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex, 100, 500);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust, int maxIter, double tol, double dampingFactor, int salsaMaxIter, double salsaTol)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
maxIter
- maximum number of iterations that will be performed for the Pagerank stage.
tol
- maximum tolerated error value for the Pagerank stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
dampingFactor
- damping factor for the Pagerank stage.
salsaMaxIter
- maximum number of iterations that will be performed for the SALSA stage.
salsaTol
- maximum tolerated error value for the SALSA stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex, 100, 500, 100, 0.001, 0.85, 100, 0.001);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust, int maxIter, double tol, double dampingFactor, int salsaMaxIter, double salsaTol, VertexSequence<ID> hubs, VertexSequence<ID> authorities)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
maxIter
- maximum number of iterations that will be performed for the Pagerank stage.
tol
- maximum tolerated error value for the Pagerank stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
dampingFactor
- damping factor for the Pagerank stage.
salsaMaxIter
- maximum number of iterations that will be performed for the SALSA stage.
salsaTol
- maximum tolerated error value for the SALSA stage. The stage will stop once the sum of the error values of all vertices becomes smaller than this value.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> authorities = graph.createVertexSequence();
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex, 100, 500, 100, 0.001, 0.85, 100, 0.001, hubs, authorities);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex, int topK, int sizeCircleOfTrust, VertexSequence<ID> hubs, VertexSequence<ID> authorities)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
sizeCircleOfTrust
- the maximum size of the circle of trust.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> authorities = graph.createVertexSequence();
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex, 100, 500, hubs, authorities);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex, int topK, VertexSequence<ID> hubs, VertexSequence<ID> authorities)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
topK
- the maximum number of recommendations that will be returned. This number should be smaller than the size of the circle of trust.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> authorities = graph.createVertexSequence();
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex, 100, hubs, authorities);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
public <ID> PgxFuture<Pair<VertexSequence<ID>,VertexSequence<ID>>> whomToFollowAsync(PgxGraph graph, PgxVertex<ID> vertex, VertexSequence<ID> hubs, VertexSequence<ID> authorities)
WTF is a recommendation algorithm. It returns two vertex sequences: one of similar users and a second one with users to follow.
The Whom To Follow algorithm is composed of two main stages: the first one finds the relevant vertices (users) for a given source vertex (a particular user), which in this implementation is done with personalized PageRank for the given source vertex. The second stage analyzes the relationships between the relevant vertices previously found, through the edges linking them with their neighbors. This second stage relies on the SALSA algorithm and assigns a ranking score to all the hub and authority vertices, so the recommendations can come from these assigned values. Whom To Follow takes the concept of authority and hub vertices and adapts it to users in social networks. The hub vertices become similar users with respect to the given source vertex (also a user), and the authority vertices become users that might be of interest to the source vertex, i.e. users to follow.
The implementation of this algorithm uses an iterative method. It will converge once it reaches the error tolerance criterion or the maximum number of iterations.
O(E * (p + s)) with E = number of edges, p <= maximum number of iterations for the Pagerank step, s <= maximum number of iterations for the SALSA step
O(5 * V) with V = number of vertices
graph
- the graph.
vertex
- the chosen vertex from the graph for personalization of the recommendations.
hubs
- (out argument) vertex sequence holding the top rated hub vertices (similar users) for the recommendations.
authorities
- (out argument) vertex sequence holding the top rated authority vertices (users to follow) for the recommendations.
PgxGraph graph = ...;
PgxVertex<Integer> vertex = graph.getVertex(128);
VertexSequence<Integer> hubs = graph.createVertexSequence();
VertexSequence<Integer> authorities = graph.createVertexSequence();
PgxFuture<Pair<VertexSequence<Integer>, VertexSequence<Integer>>> promise = analyst.whomToFollowAsync(
graph, vertex, hubs, authorities);
promise.thenAccept(wtf -> {
wtf.getFirst();
wtf.getSecond();
});
Copyright © 2015 - 2020 Oracle and/or its affiliates. All Rights Reserved.