What happened?
When writing a large amount of data to an existing non-partitioned BigQuery table using WRITE_TRUNCATE, Beam uses temp tables to write intermediate results before copying them to the destination table. These temp tables inherit the destination table's schema if SchemaUpdateOptions are not set (noted in #24471). If data with a schema change is written, the load fails because the temp table's schema does not match the schema of the data being written:
"Error while reading data, error message: JSON parsing error in row starting at position : No such field: ".
Setting SchemaUpdateOptions fixes this issue. However, if the same code is used to write a small amount of data to the same table, temp tables are not used; because the data is then written directly to the destination with WRITE_TRUNCATE, SchemaUpdateOptions are not allowed (noted in the BigQuery API documentation):
"Schema update options should only be specified with WRITE_APPEND disposition, or with WRITE_TRUNCATE disposition on a table partition."
This results in inconsistent behavior: the write fails or succeeds depending on the dataset size, regardless of whether SchemaUpdateOptions is set.
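For reference, a minimal Java SDK sketch of the kind of write described above (the table name, input PCollection, and schema are hypothetical). With withSchemaUpdateOptions set, the large-data (temp table + copy) path succeeds, but the small-data (direct load) path is rejected by BigQuery:

```java
import java.util.Set;

import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.SchemaUpdateOption;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition;
import org.apache.beam.sdk.values.PCollection;

public class TruncateWithSchemaChange {
  // `rows` carries records that include a newly added field; `updatedSchema` describes it.
  static void writeWithTruncate(PCollection<TableRow> rows, TableSchema updatedSchema) {
    rows.apply(
        "WriteToBigQuery",
        BigQueryIO.writeTableRows()
            .to("my-project:my_dataset.my_table") // hypothetical existing non-partitioned table
            .withSchema(updatedSchema)
            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
            .withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)
            // Without this option, the large-data (temp table + copy) path fails because the
            // temp tables keep the destination's old schema. With it, the small-data path fails
            // because BigQuery rejects schemaUpdateOptions on a WRITE_TRUNCATE load job against
            // a non-partitioned destination table.
            .withSchemaUpdateOptions(Set.of(SchemaUpdateOption.ALLOW_FIELD_ADDITION)));
  }
}
```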
Issue Priority
Priority: 2 (default / most bugs should be filed as P2)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam YAML
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Infrastructure
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner