Routine Load for Iceberg tables #49956

Open
Samrose-Ahmed opened this issue Aug 19, 2024 · 2 comments
Comments

@Samrose-Ahmed (Contributor)

Feature request

Support Routine Load for loading data into Iceberg tables.

Is your feature request related to a problem? Please describe.

Use StarRocks to ingest data from Kafka directly, without having to run Kafka Connect or a separate write fleet.

Describe the solution you'd like

Reuse the Routine Load infrastructure and adapt it for Iceberg tables.

Describe alternatives you've considered

Additional context

@jaogoy (Contributor) commented Aug 19, 2024

It would be good to implement this. However, if each batch is too small, the table will accumulate too many versions, and therefore query performance on Iceberg tables will suffer, IMO.

Also, can you share your scenario with me? Do you just want data-lake analytics, where query latency is not strictly required to be at the second level?

@Samrose-Ahmed (Contributor, Author)

Yes, you need to avoid committing excessively. I think intervals of around 1-5 minutes are reasonable (that range is often used as the checkpoint interval with Flink/Iceberg).

Second-level latency is not necessary and would generate too many files with Iceberg. In general, data and metadata get compacted away, so a few new files don't really affect performance much, as long as the commit interval is reasonable.
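To make the trade-off discussed above concrete, here is a small back-of-envelope sketch (not from the thread, and independent of any StarRocks or Iceberg API): assuming each Routine Load commit produces one Iceberg snapshot, it compares how quickly snapshots accumulate per day at second-level versus 1-5 minute commit intervals.

```python
def snapshots_per_day(commit_interval_seconds: float) -> int:
    """Snapshots produced in 24 hours at a fixed commit interval,
    assuming one Iceberg snapshot per commit."""
    return int(24 * 60 * 60 / commit_interval_seconds)

# Second-level commits pile up metadata ~300x faster than 5-minute commits,
# which is work that compaction and snapshot expiration must then clean up.
for label, interval in [("1 s", 1), ("1 min", 60), ("5 min", 300)]:
    print(f"{label:>6} interval -> {snapshots_per_day(interval):,} snapshots/day")
```

At 1-second commits this yields 86,400 snapshots per day versus 288 at 5-minute commits, which illustrates why the commenters converge on minute-level intervals.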


2 participants