
supported splitting packages #19

Open: wants to merge 3 commits into master
Conversation

jxq1997216

Package writing now supports splitting into multiple chunk files. You can use it like this:
package.Write("writePath",maxPackageBytes);

@xPaw
Member

xPaw commented Jul 13, 2024

This needs tests (ideally without writing gigabytes to disk though)
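One way to satisfy that without writing gigabytes is to force a split with a tiny chunk limit. This is only a hypothetical NUnit sketch: it assumes the PR's `Write(path, maxPackageBytes)` entry point, an `AddFile(name, byte[])` method, and the usual `_dir.vpk` / `_000.vpk` naming, so the details would need adjusting against the real API in WriteTest.cs.

```csharp
// Hypothetical sketch only: API names and chunk file naming are assumptions.
[Test]
public void WritesMultipleChunksWithSmallLimit()
{
    var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

    using var package = new Package();
    package.AddFile("a.txt", new byte[4096]);
    package.AddFile("b.txt", new byte[4096]);

    // A 4 KiB limit forces ~8 KiB of data into at least two chunks.
    package.Write(path, 4096);

    Assert.That(File.Exists($"{path}_dir.vpk"), Is.True);
    Assert.That(File.Exists($"{path}_000.vpk"), Is.True);
    Assert.That(File.Exists($"{path}_001.vpk"), Is.True);
}
```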

@jxq1997216
Author

This needs tests (ideally without writing gigabytes to disk though)

Excuse me, may I ask what I need to do?

@xPaw
Member

xPaw commented Jul 13, 2024

@@ -211,22 +234,47 @@ public void Write(Stream stream)

writer.Write(NullByte);

var fileTreeSize = stream.Position - headerSize;
//clear sub file
for (ushort i = 0; i < 999; i++)
Member

Remove this loop

Author

This deletes the chunk files produced by previous writes. I believe that when a user reduces the maximum chunk size and rewrites the package, leftover chunk files from the previous write could be very confusing.

Member

@xPaw xPaw Jul 13, 2024

That's up to them to clean up then, not really our job to arbitrarily loop for 1k files. We only care that the _dir.vpk references the correct chunk file, which will be overwritten.

Author

That's up to them to clean up then, not really our job to arbitrarily loop for 1k files. We only care that the _dir.vpk references the correct chunk file, which will be overwritten.

You're right, we shouldn't make that decision on the user's behalf.

@jxq1997216
Author

Like this: https://github.com/ValveResourceFormat/ValvePak/blob/master/ValvePak/ValvePak.Test/WriteTest.cs

Okay, let me try something. I haven't written anything similar before


namespace SteamDatabase.ValvePak
{
internal sealed class WriteEntry(ushort archiveIndex, uint fileOffset, PackageEntry entry)
Member

I don't think this is needed. You can calculate the ArchiveIndex directly in AddFile.

You can look at Valve's packedstore.cpp to see how they handle adding files:

  • CPackedStore::AddFile has a bMultiChunk bool.
  • They keep track of m_nHighestChunkFileIndex and then increase it if the file offset is higher than m_nWriteChunkSize which defaults to 200 * 1024 * 1024 bytes.
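Following that description, the index bookkeeping might look roughly like this. It is a sketch with made-up names; only the 200 * 1024 * 1024 byte default of m_nWriteChunkSize comes from Valve's code.

```csharp
// Sketch of computing ArchiveIndex directly in AddFile, mirroring
// CPackedStore::AddFile. All names here are illustrative, not the real API.
const uint WriteChunkSize = 200 * 1024 * 1024; // Valve's m_nWriteChunkSize default

ushort currentChunkIndex = 0; // counterpart of m_nHighestChunkFileIndex
uint currentChunkOffset = 0;  // running write offset within the current chunk

(ushort ArchiveIndex, uint FileOffset) PlaceFile(uint fileLength)
{
    // Move to the next chunk when this file would push past the limit.
    // (A file larger than the limit still gets a chunk to itself.)
    if (currentChunkOffset > 0 && currentChunkOffset + fileLength > WriteChunkSize)
    {
        currentChunkIndex++;
        currentChunkOffset = 0;
    }

    var placement = (currentChunkIndex, currentChunkOffset);
    currentChunkOffset += fileLength;
    return placement;
}
```

With this, AddFile can stamp each entry's ArchiveIndex at insertion time instead of tracking placements in a separate WriteEntry class.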

Author

That sounds good. I should go take a look at packedstore.cpp. Can you tell me where it is?

Member

Search for cstrike15_src

Author

I found it, thank you

const byte NullByte = 0;

// File tree data
bool isSingleFile = entries.Sum(s => s.TotalLength) + headerSize + 64 <= maxFileBytes;
Member

I don't like using maxFileBytes here; we should just have a bool to specify that we want multiple chunks.

This size calculation is also going to be incorrect if we want to write file hashes.

@xPaw
Member

xPaw commented Jul 13, 2024

We currently have this, but this ideally should be calculated for the chunks:

				// File hashes hash
				var fileHashesMD5 = MD5.HashData([]); // We did not write any file hashes
				writer.Write(fileHashesMD5);

Ref in valve's code: HashAllChunkFiles
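Loosely following HashAllChunkFiles, the per-chunk part could be sketched like this. The helper name and the "_NNN.vpk" chunk naming are assumptions; only MD5 and the "hash of the per-file hashes" idea come from the code above.

```csharp
using System.IO;
using System.Security.Cryptography;

// Sketch: MD5 each chunk file, then MD5 the concatenated per-chunk hashes.
// HashChunkFiles and the baseName_NNN.vpk convention are illustrative.
static byte[] HashChunkFiles(string directory, string baseName, int chunkCount)
{
    using var fileHashes = new MemoryStream();

    for (var i = 0; i < chunkCount; i++)
    {
        var chunkPath = Path.Combine(directory, $"{baseName}_{i:D3}.vpk");
        fileHashes.Write(MD5.HashData(File.ReadAllBytes(chunkPath)));
    }

    // The "file hashes hash" that would be written into the dir file.
    return MD5.HashData(fileHashes.ToArray());
}
```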

@jxq1997216
Author

We currently have this, but this ideally should be calculated for the chunks:

				// File hashes hash
				var fileHashesMD5 = MD5.HashData([]); // We did not write any file hashes
				writer.Write(fileHashesMD5);

Ref in valve's code: HashAllChunkFiles

Actually, I'm not quite sure how to calculate the hash value here. I think I should first take a look at cstrike15_src.
