Commit db82a1b

runtime: sysUsed spans after trimming
Currently, we mark a whole span as sysUsed before trimming, but this
unnecessarily tells the OS that the trimmed section of the span is in
use when, if s was scavenged, that section may itself have been
scavenged. Overall, this just makes invocations of sysUsed a little
more fine-grained.

It does come with the caveat that heap_released now needs to be
managed a little more carefully in allocSpanLocked. In this case, we
choose (as before this change) to negate any effect the span has on
heap_released before trimming, then add it back if the trimmed part
is scavengable.

For #14045.

Change-Id: Ifa384d989611398bfad3ca39d3bb595a5962a3ea
Reviewed-on: https://go-review.googlesource.com/c/140198
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
1 parent 61d40c8 commit db82a1b
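
To make the ordering described in the commit message concrete, here is a minimal, self-contained Go sketch of the accounting: subtract everything the span contributed to heap_released up front, credit back the trimmed tail if it stays scavenged, and only unmark the allocated head afterwards. The span type, the heapReleased counter, and the whole-span released() below are simplifications for illustration, not the runtime's actual types or logic.

package main

import "fmt"

const pageSize = 8192 // runtime page size (8 KiB), used here only for concreteness

// span is a toy stand-in for mspan with just the fields the example needs.
type span struct {
	npages    uintptr
	scavenged bool
}

// released models mspan.released under the simplifying assumption that a
// scavenged span's whole extent was returned to the OS.
func (s *span) released() uintptr {
	if !s.scavenged {
		return 0
	}
	return s.npages * pageSize
}

func main() {
	heapReleased := uintptr(64 * pageSize) // pretend 64 pages are currently released
	s := &span{npages: 5, scavenged: true} // a 5-page scavenged span chosen by the allocator

	// Step 1: negate everything s contributes to heap_released.
	heapReleased -= s.released()

	// Step 2: trim s down to the 2 pages we need; the 3-page tail t keeps the
	// scavenged mark, and its released bytes are credited back.
	need := uintptr(2)
	t := &span{npages: s.npages - need}
	s.npages = need
	if s.scavenged {
		t.scavenged = true
		heapReleased += t.released()
	}

	// Step 3: only now would the head s be sysUsed and unmarked, covering just
	// the pages actually being allocated.
	s.scavenged = false

	fmt.Printf("heapReleased = %d pages, t.scavenged = %v\n", heapReleased/pageSize, t.scavenged)
}

In this simplified model the net effect matches the diff below: heap_released drops only by the pages of the allocated head, and the tail goes back on the free list still marked scavenged.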

1 file changed: +20 -13

src/runtime/mheap.go

Lines changed: 20 additions & 13 deletions
@@ -884,19 +884,11 @@ HaveSpan:
 	if s.npages < npage {
 		throw("MHeap_AllocLocked - bad npages")
 	}
-	if s.scavenged {
-		// sysUsed all the pages that are actually available
-		// in the span, but only drop heap_released by the
-		// actual amount of pages released. This helps ensure
-		// that heap_released only increments and decrements
-		// by the same amounts. It's also fine, because any
-		// of the pages outside start and end wouldn't have been
-		// sysUnused in the first place.
-		sysUsed(unsafe.Pointer(s.base()), s.npages<<_PageShift)
-		start, end := s.physPageBounds()
-		memstats.heap_released -= uint64(end-start)
-		s.scavenged = false
-	}
+
+	// First, subtract any memory that was released back to
+	// the OS from s. We will re-scavenge the trimmed section
+	// if necessary.
+	memstats.heap_released -= uint64(s.released())
 
 	if s.npages > npage {
 		// Trim extra and put it back in the heap.
@@ -907,11 +899,26 @@ HaveSpan:
 		h.setSpan(t.base(), t)
 		h.setSpan(t.base()+t.npages*pageSize-1, t)
 		t.needzero = s.needzero
+		// If s was scavenged, then t may be scavenged.
+		start, end := t.physPageBounds()
+		if s.scavenged && start < end {
+			memstats.heap_released += uint64(end-start)
+			t.scavenged = true
+		}
 		s.state = mSpanManual // prevent coalescing with s
 		t.state = mSpanManual
 		h.freeSpanLocked(t, false, false, s.unusedsince)
 		s.state = mSpanFree
 	}
+	// "Unscavenge" s only AFTER splitting so that
+	// we only sysUsed whatever we actually need.
+	if s.scavenged {
+		// sysUsed all the pages that are actually available
+		// in the span. Note that we don't need to decrement
+		// heap_released since we already did so earlier.
+		sysUsed(unsafe.Pointer(s.base()), s.npages<<_PageShift)
+		s.scavenged = false
+	}
 	s.unusedsince = 0
 
 	h.setSpans(s.base(), npage, s)
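
The new code relies on helpers like s.released() and t.physPageBounds() for the heap_released arithmetic. The standalone sketch below shows roughly what those helpers compute, rounding a span inward to physical page boundaries; the constants, signatures, and guard conditions here are assumptions for illustration, not the runtime's exact implementation.

package main

import "fmt"

const pageSize = 8192      // runtime page size (8 KiB)
const physPageSize = 65536 // assumed physical page size (64 KiB)

// physPageBounds rounds a span's extent inward to physical page boundaries:
// the base is rounded up and the limit is rounded down, since only whole
// physical pages can actually be returned to the OS.
func physPageBounds(base, npages uintptr) (start, end uintptr) {
	start = base
	end = base + npages*pageSize
	if physPageSize > pageSize {
		start = (start + physPageSize - 1) &^ (physPageSize - 1)
		end &^= physPageSize - 1
	}
	return
}

// released reports how many bytes a scavenged span contributes to
// heap_released: the size of its physical-page-aligned inner region,
// or zero if the span was never scavenged or that region is empty.
func released(base, npages uintptr, scavenged bool) uintptr {
	if !scavenged {
		return 0
	}
	start, end := physPageBounds(base, npages)
	if start >= end {
		return 0
	}
	return end - start
}

func main() {
	// A 20-page (160 KiB) scavenged span whose base is 8 KiB-aligned but not
	// 64 KiB-aligned: only one whole physical page fits inside it.
	base := uintptr(0x212000)
	start, end := physPageBounds(base, 20)
	fmt.Printf("inner bounds: [%#x, %#x), released: %d bytes\n", start, end, released(base, 20, true))
}

The rounding only matters when the physical page is larger than the runtime page: in the example, the 160 KiB span contains just one whole 64 KiB physical page, so only that much could ever have been returned to the OS, and only that much is counted in heap_released.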

0 commit comments