Describe the bug, including details regarding any error messages, version, and platform.
Setup
I am using pyarrow version 18.0.0.
I am running my tests on an AWS r6g.large instance running Amazon Linux. (I also tried instances with more memory, in case some baseline amount of memory was needed regardless of minimal batch sizes and readahead, but that didn't help.)
My data consists of parquet files in S3, varying in size from a few hundred kB to ~1 GB, for a total of 3.4 GB. This is a sample subset of my actual dataset, which is ~50 GB.
Problem description
I have a set of parquet files with very small row-groups, and I am attempting to use the pyarrow.dataset API to transform them into a set of files with larger row-groups. My basic approach is dataset -> scanner -> write_dataset. After running into OOM problems with the default parameters, I ratcheted down the read and write batch sizes and the concurrent readahead:
from pyarrow import dataset as ds

data = ds.dataset(INPATH, format='parquet')

# note the small batch size and minimal values for readahead
scanner = data.scanner(
    batch_size=50,
    batch_readahead=1,
    fragment_readahead=1
)

# again, note extremely small values for output batch sizes
ds.write_dataset(
    scanner,
    base_dir=str(OUTPATH),
    format='parquet',
    min_rows_per_group=1000,
    max_rows_per_group=1000
)
Running this results in increasing memory consumption (monitored using top) until the process maxes out available memory and is finally killed.
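For what it's worth, Arrow's own allocations can be watched alongside top with something like the following (a minimal sketch, not part of my original run; the polling interval and MiB formatting are arbitrary):

import threading
import time
import pyarrow as pa

def log_pool(interval=5):
    # Poll Arrow's memory pool so its allocations can be compared with
    # the process RSS reported by top.
    pool = pa.default_memory_pool()
    while True:
        print(f"backend={pool.backend_name} "
              f"allocated={pool.bytes_allocated() / 2**20:.1f} MiB "
              f"peak={pool.max_memory() / 2**20:.1f} MiB")
        time.sleep(interval)

threading.Thread(target=log_pool, daemon=True).start()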
What worked to keep memory use under control was to replace the dataset scanner with ParquetFile.iter_batches as below:
from pyarrow import dataset as ds
import pyarrow.parquet as pq

def batcherator(filepath, batch_size):
    for f in filepath.glob('*.parquet'):
        with pq.ParquetFile(f) as pf:
            yield from pf.iter_batches(batch_size=batch_size)

scanner = batcherator(INPATH, 2000)  # it's fine with a higher batch size than before
ds.write_dataset(
    scanner,
    base_dir=str(OUTPATH),
    format='parquet',
    min_rows_per_group=10_000,  # again, higher values for the write batch sizes
    max_rows_per_group=10_000
)
Since nothing's really changing on the dataset.write_dataset side, it seems like there's some issue with runaway memory use on the scanner side of things?
The closest I could find online was this DuckDB issue duckdb/duckdb#7856, which in turn pointed to this arrow issue #31486, but that seems to hint more at a problem with write_dataset, which for me seemed fine once I replaced how I read in the data.
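One thing I have not verified myself, but which comes up in similar reports, is switching Arrow's default allocator to the system pool so that freed buffers are returned to the OS more eagerly. A sketch of that (just a suggestion, not something I have confirmed helps here):

import pyarrow as pa

# Must run before any Arrow allocations; equivalently, set the environment
# variable ARROW_DEFAULT_MEMORY_POOL=system before starting Python.
pa.set_memory_pool(pa.system_memory_pool())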
Component(s)
Python
raulcd changed the title from "OOM with Dataset.scanner but ok with ParquetFile.iter_batches" to "[Python] OOM with Dataset.scanner but ok with ParquetFile.iter_batches" on Nov 21, 2024
@raulcd thanks for taking a look at this. Let me know if there's any additional info I can provide. I think it would also be ok for me to share the data sample so you could see the actual data I'm working with. I can't give you access to our S3, but I could probably upload it anywhere you like; it's ~3.4 GB. Let me know!
@mapleFU I thought that the purpose of the batch_readahead and fragment_readahead parameters of Dataset.scanner was to control the level of parallel/advance reading of the data, in order to control memory usage. Is that not what these parameters do?
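In case it helps clarify what I'm expecting those parameters to bound, here is a sketch of the fully manual version of my workaround, buffering a fixed number of rows before writing each row group (the 10_000-row buffer and the single output file are illustrative only):

import pyarrow as pa
import pyarrow.parquet as pq

ROWS_PER_GROUP = 10_000  # illustrative buffer size

writer = None
buffered = []
buffered_rows = 0
for path in sorted(INPATH.glob('*.parquet')):
    with pq.ParquetFile(path) as pf:
        if writer is None:
            writer = pq.ParquetWriter(OUTPATH / 'rewritten.parquet', pf.schema_arrow)
        for batch in pf.iter_batches(batch_size=2000):
            buffered.append(batch)
            buffered_rows += batch.num_rows
            # flush a row group once enough rows have accumulated
            if buffered_rows >= ROWS_PER_GROUP:
                writer.write_table(pa.Table.from_batches(buffered))
                buffered, buffered_rows = [], 0
if buffered:
    writer.write_table(pa.Table.from_batches(buffered))
if writer is not None:
    writer.close()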