public static class FlinkSink.Builder
extends java.lang.Object
Modifier and Type | Method and Description
---|---
org.apache.flink.streaming.api.datastream.DataStreamSink<java.lang.Void> | append() - Appends the Iceberg sink operators that write records to the Iceberg table.
FlinkSink.Builder | distributionMode(DistributionMode mode) - Configures the write DistributionMode that the Flink sink will use.
FlinkSink.Builder | equalityFieldColumns(java.util.List<java.lang.String> columns) - Configures the equality field columns for an Iceberg table that accepts CDC or UPSERT events.
FlinkSink.Builder | flinkConf(org.apache.flink.configuration.ReadableConfig config)
FlinkSink.Builder | overwrite(boolean newOverwrite)
FlinkSink.Builder | set(java.lang.String property, java.lang.String value) - Sets a write property for the Flink sink.
FlinkSink.Builder | setAll(java.util.Map<java.lang.String,java.lang.String> properties) - Sets the write properties for the Flink sink.
FlinkSink.Builder | setSnapshotProperties(java.util.Map<java.lang.String,java.lang.String> properties)
FlinkSink.Builder | setSnapshotProperty(java.lang.String property, java.lang.String value)
FlinkSink.Builder | table(Table newTable)
FlinkSink.Builder | tableLoader(TableLoader newTableLoader) - The table loader is used to load the table lazily inside IcebergFilesCommitter; it is needed because Table is not serializable, so the table loaded via Builder#table cannot be reused in the remote task manager.
FlinkSink.Builder | tableSchema(org.apache.flink.table.api.TableSchema newTableSchema)
FlinkSink.Builder | toBranch(java.lang.String branch)
FlinkSink.Builder | uidPrefix(java.lang.String newPrefix) - Sets the uid prefix for FlinkSink operators.
FlinkSink.Builder | upsert(boolean enabled) - Transforms all INSERT/UPDATE_AFTER events from the input stream into UPSERT events, which DELETE the old records and then INSERT the new records.
FlinkSink.Builder | writeParallelism(int newWriteParallelism) - Configures the write parallelism for the Iceberg stream writer.
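A minimal usage sketch of the builder: create a sink from an existing DataStream<RowData> and a serializable TableLoader, then call append() to attach the writer and committer operators. The table path and parallelism below are hypothetical placeholders.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class IcebergSinkSketch {
    // Wire an Iceberg sink onto an existing RowData stream.
    static void attachSink(DataStream<RowData> input, String tablePath) {
        // The loader is serializable, unlike Table, so it can be shipped to task managers.
        TableLoader loader = TableLoader.fromHadoopTable(tablePath);

        FlinkSink.forRowData(input)   // start the builder from a RowData stream
            .tableLoader(loader)      // lazy table loading inside tasks
            .writeParallelism(2)      // hypothetical writer parallelism
            .append();                // attach writer and committer operators
    }
}
```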
public FlinkSink.Builder table(Table newTable)

The Table instance is used to initialize the IcebergStreamWriter, which writes all records into DataFiles and emits them to the downstream operator. Providing a table avoids loading the table repeatedly in each separate task.

Parameters:
newTable - the loaded Iceberg table instance.
Returns:
FlinkSink.Builder to connect the Iceberg table.

public FlinkSink.Builder tableLoader(TableLoader newTableLoader)

The table loader is used to load the table lazily inside IcebergFilesCommitter; this loader is needed because Table is not serializable, so the table loaded via Builder#table cannot simply be reused in the remote task manager.

Parameters:
newTableLoader - to load the Iceberg table inside tasks.
Returns:
FlinkSink.Builder to connect the Iceberg table.

public FlinkSink.Builder set(java.lang.String property, java.lang.String value)
Sets a single write property for the Flink sink. See FlinkWriteOptions for the supported properties.

public FlinkSink.Builder setAll(java.util.Map<java.lang.String,java.lang.String> properties)

Sets the write properties for the Flink sink. See FlinkWriteOptions for the supported properties.
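As a sketch, write properties can be supplied in bulk with setAll(...) or individually with set(...). The property keys and values below are assumptions; verify the exact supported keys against FlinkWriteOptions.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class WritePropsSketch {
    static void attachSink(DataStream<RowData> input, TableLoader loader) {
        Map<String, String> props = new HashMap<>();
        props.put("write-format", "parquet");              // assumed key: file format for new data files
        props.put("target-file-size-bytes", "134217728");  // assumed key: ~128 MB target file size

        FlinkSink.forRowData(input)
            .tableLoader(loader)
            .setAll(props)                      // bulk-set write properties
            .set("compression-codec", "zstd")   // or set a single property (assumed key)
            .append();
    }
}
```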
public FlinkSink.Builder tableSchema(org.apache.flink.table.api.TableSchema newTableSchema)
public FlinkSink.Builder overwrite(boolean newOverwrite)
public FlinkSink.Builder flinkConf(org.apache.flink.configuration.ReadableConfig config)
public FlinkSink.Builder distributionMode(DistributionMode mode)

Configures the write DistributionMode that the Flink sink will use. Currently, Flink supports DistributionMode.NONE and DistributionMode.HASH.

Parameters:
mode - the write distribution mode.
Returns:
FlinkSink.Builder to connect the Iceberg table.

public FlinkSink.Builder writeParallelism(int newWriteParallelism)
Configures the write parallelism for the Iceberg stream writer.

Parameters:
newWriteParallelism - the number of parallel Iceberg stream writers.
Returns:
FlinkSink.Builder to connect the Iceberg table.

public FlinkSink.Builder upsert(boolean enabled)
All INSERT/UPDATE_AFTER events from the input stream are transformed into UPSERT events, which DELETE the old records and then INSERT the new records.

Parameters:
enabled - whether to transform all INSERT/UPDATE_AFTER events into UPSERTs.
Returns:
FlinkSink.Builder to connect the Iceberg table.

public FlinkSink.Builder equalityFieldColumns(java.util.List<java.lang.String> columns)
Configures the equality field columns for an Iceberg table that accepts CDC or UPSERT events.

Parameters:
columns - defines the Iceberg table's key.
Returns:
FlinkSink.Builder to connect the Iceberg table.

public FlinkSink.Builder uidPrefix(java.lang.String newPrefix)
Sets the uid prefix for FlinkSink operators. Flink provides the option pipeline.auto-generate-uid=false to disable uid auto-generation and force explicit setting of all operator uids. When changing the uid of an existing job, restore with --allowNonRestoredState to ignore the previous sink state. During restore, the Flink sink state is used to check whether the last commit was actually successful, so --allowNonRestoredState can lead to data loss if the Iceberg commit failed in the last completed checkpoint.

Parameters:
newPrefix - prefix for Flink sink operator uid and name.
Returns:
FlinkSink.Builder to connect the Iceberg table.

public FlinkSink.Builder setSnapshotProperties(java.util.Map<java.lang.String,java.lang.String> properties)
public FlinkSink.Builder setSnapshotProperty(java.lang.String property, java.lang.String value)
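A sketch of attaching custom properties to the snapshots that the sink commits. The key and value shown are hypothetical.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class SnapshotPropsSketch {
    static void attachSink(DataStream<RowData> input, TableLoader loader) {
        FlinkSink.forRowData(input)
            .tableLoader(loader)
            // Hypothetical key/value recorded with each committed snapshot.
            .setSnapshotProperty("app.job-id", "job-123")
            .append();
    }
}
```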
public FlinkSink.Builder toBranch(java.lang.String branch)
public org.apache.flink.streaming.api.datastream.DataStreamSink<java.lang.Void> append()

Appends the Iceberg sink operators that write records to the Iceberg table.

Returns:
DataStreamSink for the sink.
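Putting the upsert-related options together, a CDC-style sink might look like the following sketch. The column name, uid prefix, and parallelism choices are hypothetical placeholders.

```java
import java.util.Arrays;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.DistributionMode;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

public class UpsertSinkSketch {
    static void attachUpsertSink(DataStream<RowData> cdcStream, TableLoader loader) {
        FlinkSink.forRowData(cdcStream)
            .tableLoader(loader)
            .upsert(true)                              // turn INSERT/UPDATE_AFTER into UPSERT
            .equalityFieldColumns(Arrays.asList("id")) // key columns used to match old records
            .distributionMode(DistributionMode.HASH)   // shuffle by key before writing
            .uidPrefix("iceberg-upsert-sink")          // stable operator uids for savepoints
            .append();
    }
}
```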