public class WriteAheadLogBackedBlockRDD<T> extends BlockRDD<T>
| Constructor and Description |
|---|
| WriteAheadLogBackedBlockRDD(SparkContext sc, BlockId[] blockIds, WriteAheadLogFileSegment[] segments, boolean storeInBlockManager, StorageLevel storageLevel, scala.reflect.ClassTag<T> evidence$1) |
| Modifier and Type | Method and Description |
|---|---|
| scala.collection.Iterator<T> | compute(Partition split, TaskContext context) Gets the partition data by getting the corresponding block from the block manager. |
| Partition[] | getPartitions() Implemented by subclasses to return the set of partitions in this RDD. |
| scala.collection.Seq<String> | getPreferredLocations(Partition split) Get the preferred location of the partition. |
Methods inherited from class org.apache.spark.rdd.BlockRDD: assertValid, blockIds, isValid, locations_, removeBlocks

Methods inherited from class org.apache.spark.rdd.RDD: aggregate, cache, cartesian, checkpoint, checkpointData, coalesce, collect, collect, collectPartitions, computeOrReadCheckpoint, conf, context, count, countApprox, countApproxDistinct, countApproxDistinct, countByValue, countByValueApprox, creationSite, dependencies, distinct, distinct, doCheckpoint, doubleRDDToDoubleRDDFunctions, elementClassTag, filter, filterWith, first, flatMap, flatMapWith, fold, foreach, foreachPartition, foreachWith, getCheckpointFile, getCreationSite, getNarrowAncestors, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, isEmpty, iterator, keyBy, map, mapPartitions, mapPartitionsWithContext, mapPartitionsWithIndex, mapPartitionsWithSplit, mapWith, markCheckpointed, max, min, name, numericRDDToDoubleRDDFunctions, partitioner, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, rddToAsyncRDDActions, rddToOrderedRDDFunctions, rddToPairRDDFunctions, rddToSequenceFileRDDFunctions, reduce, repartition, retag, retag, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sortBy, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toArray, toDebugString, toJavaRDD, toLocalIterator, top, toString, treeAggregate, treeReduce, union, unpersist, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipWithIndex, zipWithUniqueId

Methods inherited from interface org.apache.spark.Logging: initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning

public WriteAheadLogBackedBlockRDD(SparkContext sc, BlockId[] blockIds, WriteAheadLogFileSegment[] segments, boolean storeInBlockManager, StorageLevel storageLevel, scala.reflect.ClassTag<T> evidence$1)
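The sketch below shows how the constructor arguments fit together, assuming a local SparkContext and hypothetical block IDs, write ahead log paths, offsets, and lengths. In practice this RDD is created internally by Spark Streaming when receiver blocks are also written to the write ahead log, and in some Spark versions the class is not directly instantiable from user code; the example only illustrates how the parameters relate.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.{BlockId, StorageLevel, StreamBlockId}
import org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD
import org.apache.spark.streaming.util.WriteAheadLogFileSegment

object WALBackedRDDSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("wal-rdd-sketch").setMaster("local[2]"))

    // IDs of receiver-generated blocks that were also written to the write ahead log.
    val blockIds: Array[BlockId] = Array(StreamBlockId(0, 1L), StreamBlockId(0, 2L))

    // One WAL segment per block (log file path, byte offset, length).
    // The paths, offsets, and lengths below are hypothetical placeholders.
    val segments = Array(
      new WriteAheadLogFileSegment("hdfs:///checkpoint/receivedData/0/log-0", 0L, 1024),
      new WriteAheadLogFileSegment("hdfs:///checkpoint/receivedData/0/log-0", 1024L, 2048))

    // One partition is created per (blockId, segment) pair; if storeInBlockManager
    // is true, data read back from the WAL can be re-cached at the given storage level.
    val rdd = new WriteAheadLogBackedBlockRDD[Array[Byte]](
      sc, blockIds, segments,
      storeInBlockManager = true,
      storageLevel = StorageLevel.MEMORY_ONLY_SER)

    println(s"partitions: ${rdd.partitions.length}")
    sc.stop()
  }
}
```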
public Partition[] getPartitions()
Implemented by subclasses to return the set of partitions in this RDD.
Overrides: getPartitions in class BlockRDD<T>

public scala.collection.Iterator<T> compute(Partition split, TaskContext context)
Gets the partition data by getting the corresponding block from the block manager.

public scala.collection.Seq<String> getPreferredLocations(Partition split)
Get the preferred location of the partition.
Overrides: getPreferredLocations in class BlockRDD<T>
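A brief sketch of how these overrides are exercised through the public RDD API: `partitions` delegates to `getPartitions`, `preferredLocations` delegates to `getPreferredLocations`, and running any action schedules tasks that call `compute`. The helper name below is hypothetical, and the fallback behavior noted in the comments (block manager first, then the WAL segment) is assumed from the class's purpose.

```scala
import org.apache.spark.rdd.RDD

object WALBackedRDDInspection {
  // Hypothetical helper: works for any RDD, including a WriteAheadLogBackedBlockRDD.
  def describeAndMaterialize[T](rdd: RDD[T]): Long = {
    // partitions delegates to getPartitions: one partition per (block, WAL segment) pair.
    rdd.partitions.foreach { p =>
      // preferredLocations delegates to getPreferredLocations for each partition.
      println(s"partition ${p.index} prefers: ${rdd.preferredLocations(p).mkString(", ")}")
    }
    // count() runs one task per partition; each task invokes compute(), which reads
    // the block from the block manager (or, for a WAL-backed RDD, is expected to fall
    // back to the corresponding write ahead log segment).
    rdd.count()
  }
}
```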