A common task when working with strings is parsing text read from disk.

This feature set would provide convenience methods for efficiently parsing massive text files and network streams while minimising allocations and garbage collection. In addition, the goal is to provide infrastructure for indexing files using Bloom filters.

.NET currently provides `ReadOnlySequence<T>` for efficiently processing streams, but working with it correctly is difficult, unintuitive and poorly documented. The goal would be to provide higher-level APIs that still strike a good compromise on performance.
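To make the indexing idea concrete, here is a minimal Bloom filter sketch. The type and member names (`BloomFilter`, `MightContain`) are illustrative, not an existing API, and a persisted on-disk index would need a stable hash rather than .NET's per-process-randomised string hashing used below.

```csharp
using System;
using System.Collections;

public sealed class BloomFilter
{
    private readonly BitArray _bits;
    private readonly int _hashCount;

    public BloomFilter(int bitCount, int hashCount)
    {
        _bits = new BitArray(bitCount);
        _hashCount = hashCount;
    }

    public void Add(ReadOnlySpan<char> token)
    {
        foreach (int index in Indexes(token))
            _bits[index] = true;
    }

    // False means "definitely absent"; true means "possibly present".
    // This asymmetry is what lets an index skip whole blocks of a file
    // during a search without ever producing a false negative.
    public bool MightContain(ReadOnlySpan<char> token)
    {
        foreach (int index in Indexes(token))
            if (!_bits[index]) return false;
        return true;
    }

    private int[] Indexes(ReadOnlySpan<char> token)
    {
        // Double hashing: derive k bit indexes from two base hashes.
        // Note: string.GetHashCode is randomised per process, so this
        // sketch only works for an in-memory index.
        int h1 = string.GetHashCode(token, StringComparison.Ordinal);
        int h2 = token.Length == 0 ? 1 : token[0] * 31 + token.Length;
        var indexes = new int[_hashCount];
        for (int i = 0; i < _hashCount; i++)
            indexes[i] = (int)((uint)(h1 + i * h2) % (uint)_bits.Length);
        return indexes;
    }
}
```

Usage: after `bf.Add("needle")`, `bf.MightContain("needle")` is always true, while tokens never added return false with high probability (tunable via `bitCount` and `hashCount`).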
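For reference, consuming a `ReadOnlySequence<T>` directly today typically means driving a `SequenceReader<T>` by hand, which is the boilerplate a higher-level API would hide. A sketch of newline-delimited parsing (the helper name `CountLines` is illustrative):

```csharp
using System;
using System.Buffers;

static class SequenceParsing
{
    // Counts newline-delimited lines in a possibly multi-segment
    // sequence without allocating per line.
    public static int CountLines(ReadOnlySequence<byte> sequence)
    {
        var reader = new SequenceReader<byte>(sequence);
        int lines = 0;
        while (reader.TryReadTo(out ReadOnlySequence<byte> line, (byte)'\n'))
        {
            // `line` may itself span several buffer segments; real
            // parsing would inspect it here (e.g. split on ',')
            // without copying it into a string.
            lines++;
        }
        // Anything left after the last delimiter is an unterminated
        // trailing line, which callers must decide how to handle.
        return lines;
    }
}
```

For example, `SequenceParsing.CountLines(new ReadOnlySequence<byte>(Encoding.UTF8.GetBytes("a,b\nc,d\n")))` yields 2. The multi-segment case (and the trailing-remainder case) is exactly where hand-written `SequenceReader<T>` code tends to go wrong.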
Benchmarks to add:

- Rope-based CSV parser vs naive `string.Split` parsing vs `ReadOnlySequence<T>` parsing
- Bloom-filter-indexed file search vs brute-force search
- Time to index a large file vs time to re-index a large file