Name | Description | Type | Package | Framework |
AggregatedDialect | AggregatedDialect can unify multiple dialects into one virtual Dialect. | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
Aggregator | A base class for user-defined aggregations, which can be used in DataFrame and Dataset operations to take all of the elements of a group and reduce them to a single value. | Class | org.apache.spark.sql.expressions | Apache Spark |
|
AnalysisException | Thrown when a query fails to analyze, usually because the query itself is invalid. | Class | org.apache.spark.sql | Apache Spark |
|
And | A filter that evaluates to true iff both left and right evaluate to true. | Class | org.apache.spark.sql.sources | Apache Spark |
|
ArrayType | The data type for collections of multiple values. | Class | org.apache.spark.sql.types | Apache Spark |
|
BaseRelation | Represents a collection of tuples with a known schema. | Class | org.apache.spark.sql.sources | Apache Spark |
|
BinaryType | The data type representing Array[Byte] values. | Class | org.apache.spark.sql.types | Apache Spark |
|
BooleanType | The data type representing Boolean values. | Class | org.apache.spark.sql.types | Apache Spark |
|
ByteType | The data type representing Byte values. | Class | org.apache.spark.sql.types | Apache Spark |
|
CalendarIntervalType | The data type representing calendar time intervals. | Class | org.apache.spark.sql.types | Apache Spark |
|
CatalystScan | An interface for experimenting with a more direct connection to the query planner. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
Column | A column that will be computed based on the data in a DataFrame. | Class | org.apache.spark.sql | Apache Spark |
|
ColumnName | A convenience class used for constructing schemas. | Class | org.apache.spark.sql | Apache Spark |
|
CreatableRelationProvider | | Interface | org.apache.spark.sql.sources | Apache Spark |
|
DataFrame | A distributed collection of data organized into named columns. | Class | org.apache.spark.sql | Apache Spark |
|
DataFrameHolder | A container for a DataFrame, used for implicit conversions. | Class | org.apache.spark.sql | Apache Spark |
|
DataFrameNaFunctions | Functionality for working with missing data in DataFrames. | Class | org.apache.spark.sql | Apache Spark |
|
DataFrameReader | Interface used to load a DataFrame from external storage systems (e.g. file systems, key-value stores, etc.). | Class | org.apache.spark.sql | Apache Spark |
|
DataFrameStatFunctions | Statistic functions for DataFrames. | Class | org.apache.spark.sql | Apache Spark |
|
DataFrameWriter | Interface used to write a DataFrame to external storage systems (e.g. file systems, key-value stores, etc.). | Class | org.apache.spark.sql | Apache Spark |
|
Dataset | A Dataset is a strongly typed collection of objects that can be transformed in parallel using functional or relational operations. | Class | org.apache.spark.sql | Apache Spark |
|
DatasetHolder | A container for a Dataset, used for implicit conversions. | Class | org.apache.spark.sql | Apache Spark |
|
DataSourceRegister | Data sources should implement this trait so that they can register an alias to their data source. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
DataType | The base type of all Spark SQL data types. | Class | org.apache.spark.sql.types | Apache Spark |
|
DataTypes | To get/create specific data type, users should use singleton objects and factory methods provided by this class. | Class | org.apache.spark.sql.types | Apache Spark |
|
DateType | A date type, supporting "0001-01-01" through "9999-12-31". | Class | org.apache.spark.sql.types | Apache Spark |
|
DB2Dialect | | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
Decimal | A mutable implementation of BigDecimal that can hold a Long if values are small enough. | Class | org.apache.spark.sql.types | Apache Spark |
|
DecimalType | The data type representing java.math.BigDecimal values. | Class | org.apache.spark.sql.types | Apache Spark |
|
DerbyDialect | | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
DoubleType | The data type representing Double values. | Class | org.apache.spark.sql.types | Apache Spark |
|
Encoder | Used to convert a JVM object of type T to and from the internal Spark SQL representation. | Interface | org.apache.spark.sql | Apache Spark |
|
Encoders | Methods for creating an Encoder. | Class | org.apache.spark.sql | Apache Spark |
|
EqualNullSafe | Performs equality comparison, similar to EqualTo. | Class | org.apache.spark.sql.sources | Apache Spark |
|
EqualTo | A filter that evaluates to true iff the attribute evaluates to value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
ExecutionListenerManager | Manager for QueryExecutionListener. | Class | org.apache.spark.sql.util | Apache Spark |
|
ExperimentalMethods | Holder for experimental methods for the bravest. | Class | org.apache.spark.sql | Apache Spark |
|
Filter | A filter predicate for data sources. | Class | org.apache.spark.sql.sources | Apache Spark |
|
FloatType | The data type representing Float values. | Class | org.apache.spark.sql.types | Apache Spark |
|
functions | | Class | org.apache.spark.sql | Apache Spark |
|
GreaterThan | A filter that evaluates to true iff the attribute evaluates to a value greater than value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
GreaterThanOrEqual | A filter that evaluates to true iff the attribute evaluates to a value greater than or equal to value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
GroupedData | A set of methods for aggregations on a DataFrame, created by DataFrame.groupBy. | Class | org.apache.spark.sql | Apache Spark |
|
GroupedDataset | A Dataset that has been logically grouped by a user-specified grouping key. | Class | org.apache.spark.sql | Apache Spark |
|
HadoopFsRelation | A BaseRelation that provides much of the common code required for relations that store their data to an HDFS compatible filesystem. | Class | org.apache.spark.sql.sources | Apache Spark |
|
HadoopFsRelation.FakeFileStatus | | Class | org.apache.spark.sql.sources.HadoopFsRelation | Apache Spark |
|
HadoopFsRelation.FakeFileStatus$ | | Class | org.apache.spark.sql.sources.HadoopFsRelation | Apache Spark |
|
HadoopFsRelationProvider | Implemented by objects that produce relations for a specific kind of data source with a given schema and partitioned columns. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
HiveContext | An instance of the Spark SQL execution engine that integrates with data stored in Hive. | Class | org.apache.spark.sql.hive | Apache Spark |
|
In | A filter that evaluates to true iff the attribute evaluates to one of the values in the array. | Class | org.apache.spark.sql.sources | Apache Spark |
|
InsertableRelation | A BaseRelation that can be used to insert data into it through the insert method. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
IntegerType | The data type representing Int values. | Class | org.apache.spark.sql.types | Apache Spark |
|
IsNotNull | A filter that evaluates to true iff the attribute evaluates to a non-null value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
IsNull | A filter that evaluates to true iff the attribute evaluates to null. | Class | org.apache.spark.sql.sources | Apache Spark |
|
JdbcDialect | Encapsulates everything (extensions, workarounds, quirks) needed to handle the SQL dialect of a certain database or JDBC driver. | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
JdbcDialects | Registry of dialects that apply to every new JDBC DataFrame. | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
JdbcType | A database type definition coupled with the JDBC type needed to send null values to the database. | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
LessThan | A filter that evaluates to true iff the attribute evaluates to a value less than value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
LessThanOrEqual | A filter that evaluates to true iff the attribute evaluates to a value less than or equal to value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
LongType | The data type representing Long values. | Class | org.apache.spark.sql.types | Apache Spark |
|
MapType | The data type for Maps. | Class | org.apache.spark.sql.types | Apache Spark |
|
Metadata | Metadata is a wrapper over Map[String, Any] that limits the value type to simple ones: Boolean, Long, Double, String, Metadata, Array[Boolean], Array[Long], Array[Double], Array[String], and Array[Metadata]. | Class | org.apache.spark.sql.types | Apache Spark |
|
MetadataBuilder | Builder for Metadata. | Class | org.apache.spark.sql.types | Apache Spark |
|
MsSqlServerDialect | | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
MutableAggregationBuffer | A Row representing a mutable aggregation buffer. | Class | org.apache.spark.sql.expressions | Apache Spark |
|
MySQLDialect | | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
NoopDialect | NOOP dialect object, always returning the neutral element. | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
Not | A filter that evaluates to true iff child is evaluated to false. | Class | org.apache.spark.sql.sources | Apache Spark |
|
NullType | The data type representing NULL values. | Class | org.apache.spark.sql.types | Apache Spark |
|
NumericType | Numeric data types. | Class | org.apache.spark.sql.types | Apache Spark |
|
Or | A filter that evaluates to true iff at least one of left or right evaluates to true. | Class | org.apache.spark.sql.sources | Apache Spark |
|
OracleDialect | | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
OutputWriter | OutputWriter is used together with HadoopFsRelation for persisting rows to the underlying file system. | Class | org.apache.spark.sql.sources | Apache Spark |
|
OutputWriterFactory | A factory that produces OutputWriters. | Class | org.apache.spark.sql.sources | Apache Spark |
|
PostgresDialect | | Class | org.apache.spark.sql.jdbc | Apache Spark |
|
PrecisionInfo | Precision parameters for a Decimal. | Class | org.apache.spark.sql.types | Apache Spark |
|
PrunedFilteredScan | A BaseRelation that can eliminate unneeded columns and filter using selected predicates before producing an RDD containing all matching tuples as Row objects. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
PrunedScan | A BaseRelation that can eliminate unneeded columns before producing an RDD containing all of its tuples as Row objects. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
QueryExecutionListener | The interface of query execution listener that can be used to analyze execution metrics. | Interface | org.apache.spark.sql.util | Apache Spark |
|
RelationProvider | Implemented by objects that produce relations for a specific kind of data source. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
Row | Represents one row of output from a relational operator. | Interface | org.apache.spark.sql | Apache Spark |
|
RowFactory | A factory class used to construct Row objects. | Class | org.apache.spark.sql | Apache Spark |
|
SaveMode | SaveMode is used to specify the expected behavior of saving a DataFrame to a data source. | Class | org.apache.spark.sql | Apache Spark |
|
SchemaRelationProvider | Implemented by objects that produce relations for a specific kind of data source with a given schema. | Interface | org.apache.spark.sql.sources | Apache Spark |
|
ScriptTransformationWriterThread | | Class | org.apache.spark.sql.hive.execution | Apache Spark |
|
ShortType | The data type representing Short values. | Class | org.apache.spark.sql.types | Apache Spark |
|
SQLContext | The entry point for working with structured data (rows and columns) in Spark. | Class | org.apache.spark.sql | Apache Spark |
|
SQLImplicits | A collection of implicit methods for converting common Scala objects into DataFrames. | Class | org.apache.spark.sql | Apache Spark |
|
SQLUserDefinedType | A user-defined type which can be automatically recognized by a SQLContext and registered. | Class | org.apache.spark.sql.types | Apache Spark |
|
StringContains | A filter that evaluates to true iff the attribute evaluates to a string that contains the string value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
StringEndsWith | A filter that evaluates to true iff the attribute evaluates to a string that ends with value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
StringStartsWith | A filter that evaluates to true iff the attribute evaluates to a string that starts with value. | Class | org.apache.spark.sql.sources | Apache Spark |
|
StringType | The data type representing String values. | Class | org.apache.spark.sql.types | Apache Spark |
|
StructField | A field inside a StructType. | Class | org.apache.spark.sql.types | Apache Spark |
|
StructType | A StructType object can be constructed by StructType(fields: Seq[StructField]). | Class | org.apache.spark.sql.types | Apache Spark |
|
TimestampType | The data type representing java.sql.Timestamp values. | Class | org.apache.spark.sql.types | Apache Spark |
|
TypedColumn | A Column where an Encoder has been given for the expected input and return type. | Class | org.apache.spark.sql | Apache Spark |
|
UDF10 | A Spark SQL UDF that has 10 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF11 | A Spark SQL UDF that has 11 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF12 | A Spark SQL UDF that has 12 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF13 | A Spark SQL UDF that has 13 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF14 | A Spark SQL UDF that has 14 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF15 | A Spark SQL UDF that has 15 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF16 | A Spark SQL UDF that has 16 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF17 | A Spark SQL UDF that has 17 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF18 | A Spark SQL UDF that has 18 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF19 | A Spark SQL UDF that has 19 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF20 | A Spark SQL UDF that has 20 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF21 | A Spark SQL UDF that has 21 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF22 | A Spark SQL UDF that has 22 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF3 | A Spark SQL UDF that has 3 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF4 | A Spark SQL UDF that has 4 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF5 | A Spark SQL UDF that has 5 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF6 | A Spark SQL UDF that has 6 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF7 | A Spark SQL UDF that has 7 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF8 | A Spark SQL UDF that has 8 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDF9 | A Spark SQL UDF that has 9 arguments. | Interface | org.apache.spark.sql.api.java | Apache Spark |
|
UDFRegistration | Functions for registering user-defined functions. | Class | org.apache.spark.sql | Apache Spark |
|
UserDefinedAggregateFunction | The base class for implementing user-defined aggregate functions (UDAF). | Class | org.apache.spark.sql.expressions | Apache Spark |
|
UserDefinedFunction | A user-defined function. | Class | org.apache.spark.sql | Apache Spark |
|
UserDefinedType | The data type for User Defined Types (UDTs). | Class | org.apache.spark.sql.types | Apache Spark |
|
Window | Utility functions for defining windows in DataFrames. | Class | org.apache.spark.sql.expressions | Apache Spark |
|
WindowSpec | A window specification that defines the partitioning, ordering, and frame boundaries. | Class | org.apache.spark.sql.expressions | Apache Spark |
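Several of the entries above are typically used together when constructing a DataFrame programmatically: the StructType, StructField, and *Type classes describe a schema, Row carries the untyped data, and SQLContext ties them together via createDataFrame. A minimal sketch in the Spark 1.x style these entries reflect (it assumes an already-running SparkContext named `sc`; column and value names are illustrative, not from the table):

```scala
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Assumes an existing SparkContext `sc` (hypothetical setup, not shown here).
val sqlContext = new SQLContext(sc)

// StructType / StructField plus the *Type singletons describe the schema.
val schema = StructType(Seq(
  StructField("name", StringType, nullable = false),
  StructField("age", IntegerType, nullable = true)
))

// Row instances carry the data; createDataFrame joins rows with the schema.
val rows = sc.parallelize(Seq(Row("ada", 36), Row("grace", 45)))
val df = sqlContext.createDataFrame(rows, schema)

// Column expressions (the Column entry above) drive filters and projections.
df.filter(df("age") > 40).show()
```

The same schema objects also appear on the data source side: a BaseRelation reports its schema as a StructType, and filters such as GreaterThan or IsNotNull are what a PrunedFilteredScan receives from the planner.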