
Better handling of large files #6

Open
pfirsich opened this issue Jun 5, 2018 · 1 comment

Comments

@pfirsich
Owner

pfirsich commented Jun 5, 2018

Saving takes a long time. Can this be optimized? Maybe save in chunks in a separate thread while gathering the data?

The viewer also doesn't handle really big captures well. On my PC (i5 6600K) I drop below 60 FPS inside the viewer at about 25,000 captured frames. The number of draw calls should be constant, regardless of the number of frames. Also, at around that frame count I get dangerously close to the LuaJIT memory limit. Maybe there is some way to store the data more efficiently?
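One way to make draw calls constant regardless of capture length (sketched here in Python for illustration; this is not jprof code, and `downsample` is a hypothetical name) is to aggregate the frames into a fixed number of buckets before drawing, so the viewer always renders at most `num_buckets` bars:

```python
def downsample(frame_times, num_buckets=1000):
    """Aggregate any number of frame times into at most num_buckets
    (min, max, mean) triples, so drawing cost stays bounded."""
    n = len(frame_times)
    if n <= num_buckets:
        return [(t, t, t) for t in frame_times]
    buckets = []
    for b in range(num_buckets):
        # Split the frames into num_buckets roughly equal slices.
        lo = b * n // num_buckets
        hi = (b + 1) * n // num_buckets
        chunk = frame_times[lo:hi]
        buckets.append((min(chunk), max(chunk), sum(chunk) / len(chunk)))
    return buckets
```

Keeping min and max per bucket (not just the mean) preserves frame-time spikes that would otherwise be averaged away in the downsampled view.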

@pfirsich
Owner Author

pfirsich commented Jun 6, 2018

Definitely avoid running into the LuaJIT memory limit. Monitor the memory in use and discard the oldest frames in chunks when a certain threshold is crossed. Include a way to do this manually too, and maybe add a separate way to discard everything that has been captured so far.
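The eviction scheme described above could look roughly like this (a minimal Python sketch of the idea; `FrameStore` and its methods are illustrative names, not jprof's API, and real memory accounting would come from the runtime rather than caller-supplied sizes):

```python
from collections import deque

class FrameStore:
    def __init__(self, memory_limit_bytes, evict_chunk=1024):
        self.frames = deque()
        self.used = 0
        self.limit = memory_limit_bytes
        self.evict_chunk = evict_chunk

    def push(self, frame, size_bytes):
        self.frames.append((frame, size_bytes))
        self.used += size_bytes
        # Evict a whole chunk of the oldest frames at once, so eviction
        # overhead isn't paid on every single captured frame.
        while self.used > self.limit and self.frames:
            for _ in range(min(self.evict_chunk, len(self.frames))):
                _, sz = self.frames.popleft()
                self.used -= sz

    def clear(self):
        # The manual "throw away everything captured so far" path.
        self.frames.clear()
        self.used = 0
```

Evicting in chunks rather than one frame at a time also means the capture doesn't hover exactly at the limit, leaving headroom before the next eviction pass.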

Alloyed added a commit to Alloyed/jprof that referenced this issue Dec 20, 2018
Fixes pfirsich#6.

This is an opt-in feature: to enable it, call prof.enableThreadedWrite() at
the start of your program. Then, instead of saving every event on the main
thread and doing all the serialization work there, events are assigned to a
pool of worker threads, which serialize them in chunks at the end of the
program.

Potential improvements to this model could include:
* Writing serialized data to a byte buffer instead of a string. This
would save the cost of copying the chunk string between VMs, with the
added complexity of handling ownership of the buffer and potentially
having data that grows beyond the size of the buffer.
* Incremental serialization. Right now the worker threads wait until the
end of the program to start processing each event, but there's no reason
they can't do that work ahead of time in the background if it doesn't
affect the runtime of the main program (it might?)
* Handling file I/O on a background thread, or possibly in the worker
threads themselves. I haven't thought about this one too much, because
prof.write() is typically called at the end of the program, where
there's not much else going on.
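The worker-pool model the commit describes can be sketched like this (in Python purely for illustration; the actual implementation uses LuaJIT/LÖVE threads, and `threaded_serialize` is a hypothetical name, not jprof's API):

```python
import json
import queue
import threading

def threaded_serialize(events, num_workers=4, chunk_size=256):
    """Serialize events in chunks across a pool of worker threads,
    then join the per-chunk results on the calling thread."""
    chunks = [events[i:i + chunk_size] for i in range(0, len(events), chunk_size)]
    tasks = queue.Queue()
    for idx, chunk in enumerate(chunks):
        tasks.put((idx, chunk))
    results = [None] * len(chunks)

    def worker():
        while True:
            try:
                idx, chunk = tasks.get_nowait()
            except queue.Empty:
                return
            # Each worker serializes whole chunks, amortizing the
            # per-event serialization cost.
            results[idx] = ",".join(json.dumps(e) for e in chunk)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return "[" + ",".join(results) + "]"
```

Note that this Python version only shows the structure; Python's GIL would prevent real CPU parallelism here, whereas LuaJIT worker threads each run in their own VM, which is also why the commit mentions the cost of copying chunk strings between VMs.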