public class TakeOrdered extends SparkPlan implements UnaryNode, scala.Product, scala.Serializable
Limit operator after a Sort operator: takes the first limit rows of its child, ordered by sortOrder. This could have been named TopK, but Spark's top operator does the opposite in ordering, so we name it TakeOrdered to avoid confusion.
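
The naming note refers to the core RDD API, where top and takeOrdered sort in opposite directions. A quick spark-shell style illustration (assuming an existing SparkContext named sc):

```scala
val nums = sc.parallelize(Seq(5, 1, 4, 2, 3))

nums.top(2)         // Array(5, 4): the largest elements, descending
nums.takeOrdered(2) // Array(1, 2): the smallest elements, ascending
```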

| Constructor and Description |
|---|
| TakeOrdered(int limit, scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> sortOrder, SparkPlan child) |
| Modifier and Type | Method and Description |
|---|---|
| SparkPlan | child() |
| RDD<org.apache.spark.sql.catalyst.expressions.Row> | execute() Runs this query returning the result as an RDD. |
| org.apache.spark.sql.catalyst.expressions.Row[] | executeCollect() Runs this query returning the result as an array. |
| int | limit() |
| org.apache.spark.sql.catalyst.expressions.RowOrdering | ord() |
| scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> | output() |
| org.apache.spark.sql.catalyst.plans.physical.SinglePartition$ | outputPartitioning() Specifies how data is partitioned across different nodes in the cluster. |
| scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> | sortOrder() |
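
Taken together, the summary suggests how the operator evaluates: executeCollect() keeps only the limit() smallest rows of its child under the ordering ord() built from sortOrder(), and execute() exposes that small result as a single-partition RDD, which is why outputPartitioning() reports SinglePartition. A minimal sketch of that flow on plain RDDs, using only the core RDD API (the helper names and the exact wiring are illustrative assumptions, not Spark's source):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import scala.reflect.ClassTag

// Analogue of executeCollect(): keep the `limit` smallest elements under `ord`.
def collectTopRows[T](child: RDD[T], limit: Int)(implicit ord: Ordering[T]): Array[T] =
  child.takeOrdered(limit)(ord)

// Analogue of execute(): expose the already-collected result as a 1-partition RDD,
// matching the SinglePartition output partitioning reported by this operator.
def asSinglePartition[T: ClassTag](sc: SparkContext, rows: Array[T]): RDD[T] =
  sc.makeRDD(rows, 1)
```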

Methods inherited from class SparkPlan: codegenEnabled, makeCopy, requiredChildDistribution

Methods inherited from class org.apache.spark.sql.catalyst.plans.QueryPlan: expressions, inputSet, missingInput, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1, org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionUp$1, outputSet, printSchema, references, schema, schemaString, simpleString, statePrefix, transformAllExpressions, transformExpressions, transformExpressionsDown, transformExpressionsUp

Methods inherited from class org.apache.spark.sql.catalyst.trees.TreeNode: apply, argString, asCode, children, collect, fastEquals, flatMap, foreach, generateTreeString, getNodeNumbered, map, mapChildren, nodeName, numberedTreeString, otherCopyArgs, stringArgs, toString, transform, transformChildrenDown, transformChildrenUp, transformDown, transformUp, treeString, withNewChildren

Methods inherited from interface scala.Product: productArity, productElement, productIterator, productPrefix

Methods inherited from interface org.apache.spark.Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning

Constructor Detail

public TakeOrdered(int limit,
                   scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> sortOrder,
                   SparkPlan child)

Method Detail
public int limit()
public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.SortOrder> sortOrder()
public SparkPlan child()
Specified by: child in interface org.apache.spark.sql.catalyst.trees.UnaryNode<SparkPlan>

public scala.collection.Seq<org.apache.spark.sql.catalyst.expressions.Attribute> output()
Specified by: output in class org.apache.spark.sql.catalyst.plans.QueryPlan<SparkPlan>

public org.apache.spark.sql.catalyst.plans.physical.SinglePartition$ outputPartitioning()
Description copied from class: SparkPlan
Specifies how data is partitioned across different nodes in the cluster.
Overrides: outputPartitioning in class SparkPlan

public org.apache.spark.sql.catalyst.expressions.RowOrdering ord()
public org.apache.spark.sql.catalyst.expressions.Row[] executeCollect()
Description copied from class: SparkPlan
Runs this query returning the result as an array.
Overrides: executeCollect in class SparkPlan
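
For context, ORDER BY combined with LIMIT is the query shape this operator is meant to serve. A spark-shell style sketch against a Spark 1.x SQLContext (the people table and its prior registration are assumed purely for illustration):

```scala
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)   // sc: an existing SparkContext
// ... assume an RDD of case-class rows has already been registered as the table "people" ...

val query = sqlContext.sql("SELECT name FROM people ORDER BY age LIMIT 10")

// Inspect the physical plan via the developer API; a sorted-limit query like this
// is the kind of plan expected to use a TakeOrdered node.
println(query.queryExecution.executedPlan)

// Materializing the result goes through executeCollect(), returning at most 10 rows.
val firstTen = query.collect()
```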