// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Garbage collector: type and heap bitmaps.
//
// Stack, data, and bss bitmaps
//
// Stack frames and global variables in the data and bss sections are
// described by bitmaps with 1 bit per pointer-sized word. A "1" bit
// means the word is a live pointer to be visited by the GC (referred to
// as "pointer"). A "0" bit means the word should be ignored by GC
// (referred to as "scalar", though it could be a dead pointer value).
//
// Heap bitmaps
//
// The heap bitmap comprises 1 bit for each pointer-sized word in the heap,
// recording whether a pointer is stored in that word or not. This bitmap
// is stored at the end of a span for small objects and is unrolled at
// runtime from type metadata for all larger objects. Objects without
// pointers have neither a bitmap nor associated type metadata.
//
// Bits in all cases correspond to words in little-endian order.
//
// For small objects, if s is the mspan for the span starting at "start",
// then s.heapBits() returns a slice containing the bitmap for the whole span.
// That is, s.heapBits()[0] holds the goarch.PtrSize*8 bits for the first
// goarch.PtrSize*8 words from "start" through "start+63*ptrSize" in the span.
// On a related note, small objects are always small enough that their bitmap
// fits in goarch.PtrSize*8 bits, so writing out bitmap data takes two bitmap
// writes at most (because object boundaries don't generally lie on
// s.heapBits()[i] boundaries).
//
// For larger objects, if t is the type for the object starting at "start",
// within some span whose mspan is s, then the bitmap at t.GCData is "tiled"
// from "start" through "start+s.elemsize".
// Specifically, the first bit of t.GCData corresponds to the word at "start",
// the second to the word after "start", and so on up to t.PtrBytes. At t.PtrBytes,
// we skip to "start+t.Size_" and begin again from there. This process is
// repeated until we hit "start+s.elemsize".
// This tiling algorithm supports array data, since the type always refers to
// the element type of the array. Single objects are considered the same as
// single-element arrays.
// The tiling algorithm may scan data past the end of the compiler-recognized
// object, but any unused data within the allocation slot (i.e. within s.elemsize)
// is zeroed, so the GC just observes nil pointers.
// Note that this "tiled" bitmap isn't stored anywhere; it is generated on-the-fly.
//
// For objects without their own span, the type metadata is stored in the first
// word before the object at the beginning of the allocation slot. For objects
// with their own span, the type metadata is stored in the mspan.
//
// The bitmap for small unallocated objects in scannable spans is not maintained
// (can be junk).

package runtime

import (
	// Note: the exact internal package paths for the atomic and sys packages
	// vary between Go releases; the paths below are assumed for a recent release.
	"internal/abi"
	"internal/goarch"
	"internal/runtime/atomic"
	"internal/runtime/sys"
	"unsafe"
)

const (
	// A malloc header is functionally a single type pointer, but
	// we need to use 8 here to ensure 8-byte alignment of allocations
	// on 32-bit platforms. It's wasteful, but a lot of code relies on
	// 8-byte alignment for 8-byte atomics.
	mallocHeaderSize = 8

	// The minimum object size that has a malloc header, exclusive.
	//
	// The size of this value controls overheads from the malloc header.
	// The minimum size is bound by writeHeapBitsSmall, which assumes that the
	// pointer bitmap for objects of a size smaller than this doesn't cross
	// more than one pointer-word boundary. This sets an upper-bound on this
	// value at the number of bits in a uintptr, multiplied by the pointer
	// size in bytes.
	//
	// We choose a value here that has a natural cutover point in terms of memory
	// overheads. This value just happens to be the maximum possible value this
	// can be.
	//
	// A span with heap bits in it will have 128 bytes of heap bits on 64-bit
	// platforms, and 256 bytes of heap bits on 32-bit platforms. The first size
	// class where malloc headers match this overhead for 64-bit platforms is
	// 512 bytes (8 KiB / 512 bytes * 8 bytes-per-header = 128 bytes of overhead).
	// On 32-bit platforms, this same point is the 256 byte size class
	// (8 KiB / 256 bytes * 8 bytes-per-header = 256 bytes of overhead).
	//
	// Guaranteed to be exactly at a size class boundary. The reason this value is
	// an exclusive minimum is subtle. Suppose we're allocating a 504-byte object
	// and it's rounded up to 512 bytes for the size class. If minSizeForMallocHeader
	// is 512 and an inclusive minimum, then comparing the two values against
	// minSizeForMallocHeader would produce different results. In other words, the
	// comparison would not be invariant to size-class rounding. Eschewing this
	// property means a more complex check or possibly storing additional state to
	// determine whether a span has malloc headers.
	minSizeForMallocHeader = goarch.PtrSize * ptrBits
)
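
// To make the cutover described above concrete, here is an illustrative
// sketch (not runtime code) of the arithmetic, assuming a 64-bit platform,
// an 8 KiB span, and the hypothetical constant names below:
//
//	const (
//		ptrSize  = 8            // goarch.PtrSize on 64-bit platforms
//		spanSize = 8 << 10      // bytes in one small-object span
//		cutoff   = ptrSize * 64 // minSizeForMallocHeader = 512 bytes
//	)
//	// At or below the cutoff, ptr/scalar bits live at the end of the span:
//	// one bit per word costs spanSize/ptrSize/8 = 128 bytes per span.
//	// Above the cutoff, each object carries an 8-byte malloc header; at the
//	// 512-byte size class that costs (spanSize/512)*8 = 128 bytes per span,
//	// exactly the same overhead.
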
// heapBitsInSpan returns true if the size of an object implies its ptr/scalar
// data is stored at the end of the span, and is accessible via span.heapBits.
//
// Note: this works for both rounded-up sizes (span.elemsize) and unrounded
// type sizes because minSizeForMallocHeader is guaranteed to be at a size
// class boundary.
//
//go:nosplit
func heapBitsInSpan(userSize uintptr) bool {
	// N.B. minSizeForMallocHeader is an exclusive minimum so that this function is
	// invariant under size-class rounding on its input.
	return userSize <= minSizeForMallocHeader
}

// typePointers is an iterator over the pointers in a heap object.
//
// Iteration through this type implements the tiling algorithm described at the
// top of this file.
type typePointers struct {
	// elem is the address of the current array element of type typ being iterated over.
	// Objects that are not arrays are treated as single-element arrays, in which case
	// this value does not change.
	elem uintptr

	// addr is the address the iterator is currently working from and describes
	// the address of the first word referenced by mask.
	addr uintptr

	// mask is a bitmask where each bit corresponds to pointer-words after addr.
	// Bit 0 is the pointer-word at addr, Bit 1 is the next word, and so on.
	// If a bit is 1, then there is a pointer at that word.
	// nextFast and next mask out bits in this mask as their pointers are processed.
	mask uintptr

	// typ is a pointer to the type information for the heap object's type.
	// This may be nil if the object is in a span where heapBitsInSpan(span.elemsize) is true.
	typ *_type
}
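
// As a worked example of the tiling this iterator implements (an illustrative
// sketch, not runtime data): consider a hypothetical element type with
// Size_ = 4 words and PtrBytes = 2 words whose GCData bitmap is 0b01 (word 0
// is a pointer, word 1 is a scalar), filling an allocation slot of
// elemsize = 12 words as a 3-element array. The on-the-fly "tiled" bitmap is:
//
//	word index: 0  1  2  3 | 4  5  6  7 | 8  9 10 11
//	bitmap bit: 1  0  -  - | 1  0  -  - | 1  0  -  -
//
// Words marked "-" lie at or beyond PtrBytes within their element, so the
// iterator skips from there to the next element rather than reading bitmap
// bits for them.
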
// typePointersOf returns an iterator over all heap pointers in the range [addr, addr+size).
//
// addr and addr+size must be in the range [span.base(), span.limit).
//
// Note: addr+size must be passed as the limit argument to the iterator's next method on
// each iteration. This slightly awkward API is to allow typePointers to be destructured
// by the compiler.
//
// nosplit because it is used during write barriers and must not be preempted.
//
//go:nosplit
func (span *mspan) typePointersOf(addr, size uintptr) typePointers {
	base := span.objBase(addr)
	tp := span.typePointersOfUnchecked(base)
	if base == addr && size == span.elemsize {
		return tp
	}
	return tp.fastForward(addr-tp.addr, addr+size)
}

// typePointersOfUnchecked is like typePointersOf, but assumes addr is the base
// of an allocation slot in a span (the start of the object if no header, the
// header otherwise). It returns an iterator that generates all pointers
// in the range [addr, addr+span.elemsize).
//
// nosplit because it is used during write barriers and must not be preempted.
//
//go:nosplit
func (span *mspan) typePointersOfUnchecked(addr uintptr) typePointers {
	const doubleCheck = false
	if doubleCheck && span.objBase(addr) != addr {
		print("runtime: addr=", addr, " base=", span.objBase(addr), "\n")
		throw("typePointersOfUnchecked consisting of non-base-address for object")
	}

	spc := span.spanclass
	if spc.noscan() {
		return typePointers{}
	}
	if heapBitsInSpan(span.elemsize) {
		// Handle header-less objects.
		return typePointers{elem: addr, addr: addr, mask: span.heapBitsSmallForAddr(addr)}
	}

	// All of these objects have a header.
	var typ *_type
	if spc.sizeclass() != 0 {
		// Pull the allocation header from the first word of the object.
		typ = *(**_type)(unsafe.Pointer(addr))
		addr += mallocHeaderSize
	} else {
		typ = span.largeType
		if typ == nil {
			// Allow a nil type here for delayed zeroing. See mallocgc.
			return typePointers{}
		}
	}
	gcdata := typ.GCData
	return typePointers{elem: addr, addr: addr, mask: readUintptr(gcdata), typ: typ}
}

// typePointersOfType is like typePointersOf, but assumes addr points to one or more
// contiguous instances of the provided type. The provided type must not be nil and
// it must not have its type metadata encoded as a gcprog.
//
// It returns an iterator that tiles typ.GCData starting from addr. It's the caller's
// responsibility to limit iteration.
//
// nosplit because its callers are nosplit and require all their callees to be nosplit.
//
//go:nosplit
func (span *mspan) typePointersOfType(typ *abi.Type, addr uintptr) typePointers {
	const doubleCheck = false
	if doubleCheck && (typ == nil || typ.Kind_&abi.KindGCProg != 0) {
		throw("bad type passed to typePointersOfType")
	}
	if span.spanclass.noscan() {
		return typePointers{}
	}
	// Since we have the type, pretend we have a header.
	gcdata := typ.GCData
	return typePointers{elem: addr, addr: addr, mask: readUintptr(gcdata), typ: typ}
}
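
// A minimal sketch of how these constructors and the iterator methods below
// are consumed (illustrative only; the real callers are the write barrier and
// heap-scanning code). Given the base address obj of an object in span s,
// every pointer slot can be visited with:
//
//	tp := s.typePointersOf(obj, s.elemsize)
//	for {
//		var addr uintptr
//		if tp, addr = tp.next(obj + s.elemsize); addr == 0 {
//			break
//		}
//		// addr is the address of a pointer-typed word in the object.
//	}
//
// Note that the same limit, obj+s.elemsize, is passed to next on every
// iteration, as the typePointersOf documentation requires.
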
// nextFast is the fast path of next. nextFast is written to be inlineable and,
// as the name implies, fast.
//
// Callers that are performance-critical should iterate using the following
// pattern:
//
//	for {
//		var addr uintptr
//		if tp, addr = tp.nextFast(); addr == 0 {
//			if tp, addr = tp.next(limit); addr == 0 {
//				break
//			}
//		}
//		// Use addr.
//		...
//	}
//
// nosplit because it is used during write barriers and must not be preempted.
//
//go:nosplit
func (tp typePointers) nextFast() (typePointers, uintptr) {
	// TESTQ/JEQ
	if tp.mask == 0 {
		return tp, 0
	}
	// BSFQ
	var i int
	if goarch.PtrSize == 8 {
		i = sys.TrailingZeros64(uint64(tp.mask))
	} else {
		i = sys.TrailingZeros32(uint32(tp.mask))
	}
	// BTCQ
	tp.mask ^= uintptr(1) << (i & (ptrBits - 1))
	// LEAQ (XX)(XX*8)
	return tp, tp.addr + uintptr(i)*goarch.PtrSize
}

// next advances the pointers iterator, returning the updated iterator and
// the address of the next pointer.
//
// limit must be the same each time it is passed to next.
//
// nosplit because it is used during write barriers and must not be preempted.
//
//go:nosplit
func (tp typePointers) next(limit uintptr) (typePointers, uintptr) {
	for {
		if tp.mask != 0 {
			return tp.nextFast()
		}

		// Stop if we don't actually have type information.
		if tp.typ == nil {
			return typePointers{}, 0
		}

		// Advance to the next element if necessary.
		if tp.addr+goarch.PtrSize*ptrBits >= tp.elem+tp.typ.PtrBytes {
			tp.elem += tp.typ.Size_
			tp.addr = tp.elem
		} else {
			tp.addr += ptrBits * goarch.PtrSize
		}

		// Check if we've exceeded the limit with the last update.
		if tp.addr >= limit {
			return typePointers{}, 0
		}

		// Grab more bits and try again.
		tp.mask = readUintptr(addb(tp.typ.GCData, (tp.addr-tp.elem)/goarch.PtrSize/8))
		if tp.addr+goarch.PtrSize*ptrBits > limit {
			bits := (tp.addr + goarch.PtrSize*ptrBits - limit) / goarch.PtrSize
			tp.mask &^= ((1 << (bits)) - 1) << (ptrBits - bits)
		}
	}
}

// fastForward moves the iterator forward by n bytes. n must be a multiple
// of goarch.PtrSize. limit must be the same limit passed to next for this
// iterator.
//
// nosplit because it is used during write barriers and must not be preempted.
//
//go:nosplit
func (tp typePointers) fastForward(n, limit uintptr) typePointers {
	// Basic bounds check.
	target := tp.addr + n
	if target >= limit {
		return typePointers{}
	}
	if tp.typ == nil {
		// Handle small objects.
		// Clear any bits before the target address.
		tp.mask &^= (1 << ((target - tp.addr) / goarch.PtrSize)) - 1
		// Clear any bits past the limit.
		if tp.addr+goarch.PtrSize*ptrBits > limit {
			bits := (tp.addr + goarch.PtrSize*ptrBits - limit) / goarch.PtrSize
			tp.mask &^= ((1 << (bits)) - 1) << (ptrBits - bits)
		}
		return tp
	}

	// Move up elem and addr.
	// Offsets within an element are always at a ptrBits*goarch.PtrSize boundary.
	if n >= tp.typ.Size_ {
		// elem needs to be moved to the element containing
		// tp.addr + n.
		oldelem := tp.elem
		tp.elem += (tp.addr - tp.elem + n) / tp.typ.Size_ * tp.typ.Size_
		tp.addr = tp.elem + alignDown(n-(tp.elem-oldelem), ptrBits*goarch.PtrSize)
	} else {
		tp.addr += alignDown(n, ptrBits*goarch.PtrSize)
	}

	if tp.addr-tp.elem >= tp.typ.PtrBytes {
		// We're starting in the non-pointer area of an array.
		// Move up to the next element.
		tp.elem += tp.typ.Size_
		tp.addr = tp.elem
		tp.mask = readUintptr(tp.typ.GCData)

		// We may have exceeded the limit after this. Bail just like next does.
		if tp.addr >= limit {
			return typePointers{}
		}
	} else {
		// Grab the mask, but then clear any bits before the target address and any
		// bits over the limit.
.mask = readUintptr(addb(.typ.GCData, (.addr-.elem)/goarch.PtrSize/8)) .mask &^= (1 << (( - .addr) / goarch.PtrSize)) - 1 }if .addr+goarch.PtrSize*ptrBits > { := (.addr + goarch.PtrSize*ptrBits - ) / goarch.PtrSize .mask &^= ((1 << ()) - 1) << (ptrBits - ) }return}// objBase returns the base pointer for the object containing addr in span.//// Assumes that addr points into a valid part of span (span.base() <= addr < span.limit).////go:nosplitfunc ( *mspan) ( uintptr) uintptr {return .base() + .objIndex()*.elemsize}// bulkBarrierPreWrite executes a write barrier// for every pointer slot in the memory range [src, src+size),// using pointer/scalar information from [dst, dst+size).// This executes the write barriers necessary before a memmove.// src, dst, and size must be pointer-aligned.// The range [dst, dst+size) must lie within a single object.// It does not perform the actual writes.//// As a special case, src == 0 indicates that this is being used for a// memclr. bulkBarrierPreWrite will pass 0 for the src of each write// barrier.//// Callers should call bulkBarrierPreWrite immediately before// calling memmove(dst, src, size). This function is marked nosplit// to avoid being preempted; the GC must not stop the goroutine// between the memmove and the execution of the barriers.// The caller is also responsible for cgo pointer checks if this// may be writing Go pointers into non-Go memory.//// Pointer data is not maintained for allocations containing// no pointers at all; any caller of bulkBarrierPreWrite must first// make sure the underlying allocation contains pointers, usually// by checking typ.PtrBytes.//// The typ argument is the type of the space at src and dst (and the// element type if src and dst refer to arrays) and it is optional.// If typ is nil, the barrier will still behave as expected and typ// is used purely as an optimization. However, it must be used with// care.//// If typ is not nil, then src and dst must point to one or more values// of type typ. The caller must ensure that the ranges [src, src+size)// and [dst, dst+size) refer to one or more whole values of type src and// dst (leaving off the pointerless tail of the space is OK). If this// precondition is not followed, this function will fail to scan the// right pointers.//// When in doubt, pass nil for typ. That is safe and will always work.//// Callers must perform cgo checks if goexperiment.CgoCheck2.////go:nosplitfunc bulkBarrierPreWrite(, , uintptr, *abi.Type) {if (||)&(goarch.PtrSize-1) != 0 {throw("bulkBarrierPreWrite: unaligned arguments") }if !writeBarrier.enabled {return } := spanOf()if == nil {// If dst is a global, use the data or BSS bitmaps to // execute write barriers.for , := rangeactiveModules() {if .data <= && < .edata {bulkBarrierBitmap(, , , -.data, .gcdatamask.bytedata)return } }for , := rangeactiveModules() {if .bss <= && < .ebss {bulkBarrierBitmap(, , , -.bss, .gcbssmask.bytedata)return } }return } elseif .state.get() != mSpanInUse || < .base() || .limit <= {// dst was heap memory at some point, but isn't now. // It can't be a global. It must be either our stack, // or in the case of direct channel sends, it could be // another stack. Either way, no need for barriers. 
// This will also catch if dst is in a freed span, // though that should never have.return } := &getg().m.p.ptr().wbBuf// Double-check that the bitmaps generated in the two possible paths match.const = falseif {doubleCheckTypePointersOfType(, , , ) }vartypePointersif != nil && .Kind_&abi.KindGCProg == 0 { = .typePointersOfType(, ) } else { = .typePointersOf(, ) }if == 0 {for {varuintptrif , = .next( + ); == 0 {break } := (*uintptr)(unsafe.Pointer()) := .get1() [0] = * } } else {for {varuintptrif , = .next( + ); == 0 {break } := (*uintptr)(unsafe.Pointer()) := (*uintptr)(unsafe.Pointer( + ( - ))) := .get2() [0] = * [1] = * } }}// bulkBarrierPreWriteSrcOnly is like bulkBarrierPreWrite but// does not execute write barriers for [dst, dst+size).//// In addition to the requirements of bulkBarrierPreWrite// callers need to ensure [dst, dst+size) is zeroed.//// This is used for special cases where e.g. dst was just// created and zeroed with malloc.//// The type of the space can be provided purely as an optimization.// See bulkBarrierPreWrite's comment for more details -- use this// optimization with great care.////go:nosplitfunc bulkBarrierPreWriteSrcOnly(, , uintptr, *abi.Type) {if (||)&(goarch.PtrSize-1) != 0 {throw("bulkBarrierPreWrite: unaligned arguments") }if !writeBarrier.enabled {return } := &getg().m.p.ptr().wbBuf := spanOf()// Double-check that the bitmaps generated in the two possible paths match.const = falseif {doubleCheckTypePointersOfType(, , , ) }vartypePointersif != nil && .Kind_&abi.KindGCProg == 0 { = .typePointersOfType(, ) } else { = .typePointersOf(, ) }for {varuintptrif , = .next( + ); == 0 {break } := (*uintptr)(unsafe.Pointer( - + )) := .get1() [0] = * }}// initHeapBits initializes the heap bitmap for a span.//// TODO(mknyszek): This should set the heap bits for single pointer// allocations eagerly to avoid calling heapSetType at allocation time,// just to write one bit.func ( *mspan) ( bool) {if (!.spanclass.noscan() && heapBitsInSpan(.elemsize)) || .isUserArenaChunk { := .heapBits()clear() }}// heapBits returns the heap ptr/scalar bits stored at the end of the span for// small object spans and heap arena spans.//// Note that the uintptr of each element means something different for small object// spans and for heap arena spans. Small object spans are easy: they're never interpreted// as anything but uintptr, so they're immune to differences in endianness. However, the// heapBits for user arena spans is exposed through a dummy type descriptor, so the byte// ordering needs to match the same byte ordering the compiler would emit. The compiler always// emits the bitmap data in little endian byte ordering, so on big endian platforms these// uintptrs will have their byte orders swapped from what they normally would be.//// heapBitsInSpan(span.elemsize) or span.isUserArenaChunk must be true.////go:nosplitfunc ( *mspan) () []uintptr {const = falseif && !.isUserArenaChunk {if .spanclass.noscan() {throw("heapBits called for noscan") }if .elemsize > minSizeForMallocHeader {throw("heapBits called for span class that should have a malloc header") } }// Find the bitmap at the end of the span. // // Nearly every span with heap bits is exactly one page in size. 
Arenas are the only exception.if .npages == 1 {// This will be inlined and constant-folded down.returnheapBitsSlice(.base(), pageSize) }returnheapBitsSlice(.base(), .npages*pageSize)}// Helper for constructing a slice for the span's heap bits.////go:nosplitfunc heapBitsSlice(, uintptr) []uintptr { := / goarch.PtrSize / 8 := int( / goarch.PtrSize)varnotInHeapSlice = notInHeapSlice{(*notInHeap)(unsafe.Pointer( + - )), , }return *(*[]uintptr)(unsafe.Pointer(&))}// heapBitsSmallForAddr loads the heap bits for the object stored at addr from span.heapBits.//// addr must be the base pointer of an object in the span. heapBitsInSpan(span.elemsize)// must be true.////go:nosplitfunc ( *mspan) ( uintptr) uintptr { := .npages * pageSize := / goarch.PtrSize / 8 := (*byte)(unsafe.Pointer(.base() + - ))// These objects are always small enough that their bitmaps // fit in a single word, so just load the word or two we need. // // Mirrors mspan.writeHeapBitsSmall. // // We should be using heapBits(), but unfortunately it introduces // both bounds checks panics and throw which causes us to exceed // the nosplit limit in quite a few cases. := ( - .base()) / goarch.PtrSize / ptrBits := ( - .base()) / goarch.PtrSize % ptrBits := .elemsize / goarch.PtrSize := (*uintptr)(unsafe.Pointer(addb(, goarch.PtrSize*(+0)))) := (*uintptr)(unsafe.Pointer(addb(, goarch.PtrSize*(+1))))varuintptrif + > ptrBits {// Two reads. := ptrBits - := - = * >> |= (* & ((1 << ) - 1)) << } else {// One read. = (* >> ) & ((1 << ) - 1) }return}// writeHeapBitsSmall writes the heap bits for small objects whose ptr/scalar data is// stored as a bitmap at the end of the span.//// Assumes dataSize is <= ptrBits*goarch.PtrSize. x must be a pointer into the span.// heapBitsInSpan(dataSize) must be true. dataSize must be >= typ.Size_.////go:nosplitfunc ( *mspan) (, uintptr, *_type) ( uintptr) {// The objects here are always really small, so a single load is sufficient. := readUintptr(.GCData)// Create repetitions of the bitmap if we have a small array. := .elemsize / goarch.PtrSize = .PtrBytes := switch .Size_ {casegoarch.PtrSize: = (1 << ( / goarch.PtrSize)) - 1default:for := .Size_; < ; += .Size_ { |= << ( / goarch.PtrSize) += .Size_ } }// Since we're never writing more than one uintptr's worth of bits, we're either going // to do one or two writes. := .heapBits() := ( - .base()) / goarch.PtrSize := / ptrBits := % ptrBitsif + > ptrBits {// Two writes. := ptrBits - := - [+0] = [+0]&(^uintptr(0)>>) | ( << ) [+1] = [+1]&^((1<<)-1) | ( >> ) } else {// One write. [] = ([] &^ (((1 << ) - 1) << )) | ( << ) }const = falseif { := .heapBitsSmallForAddr()if != {print("runtime: x=", hex(), " i=", , " j=", , " bits=", , "\n")print("runtime: dataSize=", , " typ.Size_=", .Size_, " typ.PtrBytes=", .PtrBytes, "\n")print("runtime: src0=", hex(), " src=", hex(), " srcRead=", hex(), "\n")throw("bad pointer bits written for small object") } }return}// heapSetType records that the new allocation [x, x+size)// holds in [x, x+dataSize) one or more values of type typ.// (The number of values is given by dataSize / typ.Size.)// If dataSize < size, the fragment [x+dataSize, x+size) is// recorded as non-pointer data.// It is known that the type has pointers somewhere;// malloc does not call heapSetType when there are no pointers.//// There can be read-write races between heapSetType and things// that read the heap metadata like scanobject. 
However, since// heapSetType is only used for objects that have not yet been// made reachable, readers will ignore bits being modified by this// function. This does mean this function cannot transiently modify// shared memory that belongs to neighboring objects. Also, on weakly-ordered// machines, callers must execute a store/store (publication) barrier// between calling this function and making the object reachable.func heapSetType(, uintptr, *_type, **_type, *mspan) ( uintptr) {const = false := if == nil {if && (!heapBitsInSpan() || !heapBitsInSpan(.elemsize)) {throw("tried to write heap bits, but no heap bits in span") }// Handle the case where we have no malloc header. = .writeHeapBitsSmall(, , ) } else {if .Kind_&abi.KindGCProg != 0 {// Allocate space to unroll the gcprog. This space will consist of // a dummy _type value and the unrolled gcprog. The dummy _type will // refer to the bitmap, and the mspan will refer to the dummy _type.if .spanclass.sizeclass() != 0 {throw("GCProg for type that isn't large") } := alignUp(unsafe.Sizeof(_type{}), goarch.PtrSize) := += alignUp(.PtrBytes/goarch.PtrSize/8, goarch.PtrSize) := alignUp(, pageSize) / pageSizevar *mspansystemstack(func() { = mheap_.allocManual(, spanAllocPtrScalarBits)memclrNoHeapPointers(unsafe.Pointer(.base()), .npages*pageSize) })// Write a dummy _type in the new space. // // We only need to write size, PtrBytes, and GCData, since that's all // the GC cares about. = (*_type)(unsafe.Pointer(.base())) .Size_ = .Size_ .PtrBytes = .PtrBytes .GCData = (*byte)(add(unsafe.Pointer(.base()), )) .TFlag = abi.TFlagUnrolledBitmap// Expand the GC program into space reserved at the end of the new span.runGCProg(addb(.GCData, 4), .GCData) }// Write out the header. * = = .elemsize }if {doubleCheckHeapPointers(, , , , )// To exercise the less common path more often, generate // a random interior pointer and make sure iterating from // that point works correctly too. := .elemsizeif == nil { = } := alignUp(uintptr(cheaprand())%, goarch.PtrSize) := - if == 0 { -= goarch.PtrSize += goarch.PtrSize } := + -= alignDown(uintptr(cheaprand())%, goarch.PtrSize)if == 0 { = goarch.PtrSize }// Round up the type to the size of the type. = ( + .Size_ - 1) / .Size_ * .Size_if + > + { = + - }doubleCheckHeapPointersInterior(, , , , , , ) }return}func doubleCheckHeapPointers(, uintptr, *_type, **_type, *mspan) {// Check that scanning the full object works. := .typePointersOfUnchecked(.objBase()) := .elemsizeif == nil { = } := falsefor := uintptr(0); < ; += goarch.PtrSize {// Compute the pointer bit we want at offset i. := falseif < .elemsize { := % .Size_if < .PtrBytes { := / goarch.PtrSize = *addb(.GCData, /8)>>(%8)&1 != 0 } }if {varuintptr , = .next( + .elemsize)if == 0 {println("runtime: found bad iterator") }if != + {print("runtime: addr=", hex(), " x+i=", hex(+), "\n") = true } } }if ! 
{varuintptr , = .next( + .elemsize)if == 0 {return }println("runtime: extra pointer:", hex()) }print("runtime: hasHeader=", != nil, " typ.Size_=", .Size_, " hasGCProg=", .Kind_&abi.KindGCProg != 0, "\n")print("runtime: x=", hex(), " dataSize=", , " elemsize=", .elemsize, "\n")print("runtime: typ=", unsafe.Pointer(), " typ.PtrBytes=", .PtrBytes, "\n")print("runtime: limit=", hex(+.elemsize), "\n") = .typePointersOfUnchecked()dumpTypePointers()for {varuintptrif , = .next( + .elemsize); == 0 {println("runtime: would've stopped here")dumpTypePointers()break }print("runtime: addr=", hex(), "\n")dumpTypePointers() }throw("heapSetType: pointer entry not correct")}func doubleCheckHeapPointersInterior(, , , uintptr, *_type, **_type, *mspan) { := falseif < {print("runtime: interior=", hex(), " x=", hex(), "\n")throw("found bad interior pointer") } := - := .typePointersOf(, )for := ; < +; += goarch.PtrSize {// Compute the pointer bit we want at offset i. := falseif < .elemsize { := % .Size_if < .PtrBytes { := / goarch.PtrSize = *addb(.GCData, /8)>>(%8)&1 != 0 } }if {varuintptr , = .next( + )if == 0 {println("runtime: found bad iterator") = true }if != + {print("runtime: addr=", hex(), " x+i=", hex(+), "\n") = true } } }if ! {varuintptr , = .next( + )if == 0 {return }println("runtime: extra pointer:", hex()) }print("runtime: hasHeader=", != nil, " typ.Size_=", .Size_, "\n")print("runtime: x=", hex(), " dataSize=", , " elemsize=", .elemsize, " interior=", hex(), " size=", , "\n")print("runtime: limit=", hex(+), "\n") = .typePointersOf(, )dumpTypePointers()for {varuintptrif , = .next( + ); == 0 {println("runtime: would've stopped here")dumpTypePointers()break }print("runtime: addr=", hex(), "\n")dumpTypePointers() }print("runtime: want: ")for := ; < +; += goarch.PtrSize {// Compute the pointer bit we want at offset i. 
:= falseif < { := % .Size_if < .PtrBytes { := / goarch.PtrSize = *addb(.GCData, /8)>>(%8)&1 != 0 } }if {print("1") } else {print("0") } }println()throw("heapSetType: pointer entry not correct")}//go:nosplitfunc doubleCheckTypePointersOfType( *mspan, *_type, , uintptr) {if == nil || .Kind_&abi.KindGCProg != 0 {return }if .Kind_&abi.KindMask == abi.Interface {// Interfaces are unfortunately inconsistently handled // when it comes to the type pointer, so it's easy to // produce a lot of false positives here.return } := .typePointersOfType(, ) := .typePointersOf(, ) := falsefor {var , uintptr , = .next( + ) , = .next( + )if != { = truebreak }if == 0 {break } }if { := .typePointersOfType(, ) := .typePointersOf(, )print("runtime: addr=", hex(), " size=", , "\n")print("runtime: type=", toRType().string(), "\n")dumpTypePointers()dumpTypePointers()for {var , uintptr , = .next( + ) , = .next( + )print("runtime: ", hex(), " ", hex(), "\n")if == 0 && == 0 {break } }throw("mismatch between typePointersOfType and typePointersOf") }}func dumpTypePointers( typePointers) {print("runtime: tp.elem=", hex(.elem), " tp.typ=", unsafe.Pointer(.typ), "\n")print("runtime: tp.addr=", hex(.addr), " tp.mask=")for := uintptr(0); < ptrBits; ++ {if .mask&(uintptr(1)<<) != 0 {print("1") } else {print("0") } }println()}// addb returns the byte pointer p+n.////go:nowritebarrier//go:nosplitfunc addb( *byte, uintptr) *byte {// Note: wrote out full expression instead of calling add(p, n) // to reduce the number of temporaries generated by the // compiler for this trivial expression during inlining.return (*byte)(unsafe.Pointer(uintptr(unsafe.Pointer()) + ))}// subtractb returns the byte pointer p-n.////go:nowritebarrier//go:nosplitfunc subtractb( *byte, uintptr) *byte {// Note: wrote out full expression instead of calling add(p, -n) // to reduce the number of temporaries generated by the // compiler for this trivial expression during inlining.return (*byte)(unsafe.Pointer(uintptr(unsafe.Pointer()) - ))}// add1 returns the byte pointer p+1.////go:nowritebarrier//go:nosplitfunc add1( *byte) *byte {// Note: wrote out full expression instead of calling addb(p, 1) // to reduce the number of temporaries generated by the // compiler for this trivial expression during inlining.return (*byte)(unsafe.Pointer(uintptr(unsafe.Pointer()) + 1))}// subtract1 returns the byte pointer p-1.//// nosplit because it is used during write barriers and must not be preempted.////go:nowritebarrier//go:nosplitfunc subtract1( *byte) *byte {// Note: wrote out full expression instead of calling subtractb(p, 1) // to reduce the number of temporaries generated by the // compiler for this trivial expression during inlining.return (*byte)(unsafe.Pointer(uintptr(unsafe.Pointer()) - 1))}// markBits provides access to the mark bit for an object in the heap.// bytep points to the byte holding the mark bit.// mask is a byte with a single bit set that can be &ed with *bytep// to see if the bit has been set.// *m.byte&m.mask != 0 indicates the mark bit is set.// index can be used along with span information to generate// the address of the object in the heap.// We maintain one set of mark bits for allocation and one for// marking purposes.type markBits struct { bytep *uint8 mask uint8 index uintptr}//go:nosplitfunc ( *mspan) ( uintptr) markBits { , := .allocBits.bitp()returnmarkBits{, , }}// refillAllocCache takes 8 bytes s.allocBits starting at whichByte// and negates them so that ctz (count trailing zeros) instructions// can be used. 
It then places these 8 bytes into the cached 64 bit// s.allocCache.func ( *mspan) ( uint16) { := (*[8]uint8)(unsafe.Pointer(.allocBits.bytep(uintptr()))) := uint64(0) |= uint64([0]) |= uint64([1]) << (1 * 8) |= uint64([2]) << (2 * 8) |= uint64([3]) << (3 * 8) |= uint64([4]) << (4 * 8) |= uint64([5]) << (5 * 8) |= uint64([6]) << (6 * 8) |= uint64([7]) << (7 * 8) .allocCache = ^}// nextFreeIndex returns the index of the next free object in s at// or after s.freeindex.// There are hardware instructions that can be used to make this// faster if profiling warrants it.func ( *mspan) () uint16 { := .freeindex := .nelemsif == {return }if > {throw("s.freeindex > s.nelems") } := .allocCache := sys.TrailingZeros64()for == 64 {// Move index to start of next cached bits. = ( + 64) &^ (64 - 1)if >= { .freeindex = return } := / 8// Refill s.allocCache with the next 64 alloc bits. .refillAllocCache() = .allocCache = sys.TrailingZeros64()// nothing available in cached bits // grab the next 8 bytes and try again. } := + uint16()if >= { .freeindex = return } .allocCache >>= uint( + 1) = + 1if %64 == 0 && != {// We just incremented s.freeindex so it isn't 0. // As each 1 in s.allocCache was encountered and used for allocation // it was shifted away. At this point s.allocCache contains all 0s. // Refill s.allocCache so that it corresponds // to the bits at s.allocBits starting at s.freeindex. := / 8 .refillAllocCache() } .freeindex = return}// isFree reports whether the index'th object in s is unallocated.//// The caller must ensure s.state is mSpanInUse, and there must have// been no preemption points since ensuring this (which could allow a// GC transition, which would allow the state to change).func ( *mspan) ( uintptr) bool {if < uintptr(.freeIndexForScan) {returnfalse } , := .allocBits.bitp()return *& == 0}// divideByElemSize returns n/s.elemsize.// n must be within [0, s.npages*_PageSize),// or may be exactly s.npages*_PageSize// if s.elemsize is from sizeclasses.go.//// nosplit, because it is called by objIndex, which is nosplit////go:nosplitfunc ( *mspan) ( uintptr) uintptr {const = false// See explanation in mksizeclasses.go's computeDivMagic. := uintptr((uint64() * uint64(.divMul)) >> 32)if && != /.elemsize {println(, "/", .elemsize, "should be", /.elemsize, "but got", )throw("bad magic division") }return}// nosplit, because it is called by other nosplit code like findObject////go:nosplitfunc ( *mspan) ( uintptr) uintptr {return .divideByElemSize( - .base())}func markBitsForAddr( uintptr) markBits { := spanOf() := .objIndex()return .markBitsForIndex()}func ( *mspan) ( uintptr) markBits { , := .gcmarkBits.bitp()returnmarkBits{, , }}func ( *mspan) () markBits {returnmarkBits{&.gcmarkBits.x, uint8(1), 0}}// isMarked reports whether mark bit m is set.func ( markBits) () bool {return *.bytep&.mask != 0}// setMarked sets the marked bit in the markbits, atomically.func ( markBits) () {// Might be racing with other updates, so use atomic update always. // We used to be clever here and use a non-atomic update in certain // cases, but it's not worth the risk.atomic.Or8(.bytep, .mask)}// setMarkedNonAtomic sets the marked bit in the markbits, non-atomically.func ( markBits) () { *.bytep |= .mask}// clearMarked clears the marked bit in the markbits, atomically.func ( markBits) () {// Might be racing with other updates, so use atomic update always. 
// We used to be clever here and use a non-atomic update in certain // cases, but it's not worth the risk.atomic.And8(.bytep, ^.mask)}// markBitsForSpan returns the markBits for the span base address base.func markBitsForSpan( uintptr) ( markBits) { = markBitsForAddr()if .mask != 1 {throw("markBitsForSpan: unaligned start") }return}// advance advances the markBits to the next object in the span.func ( *markBits) () {if .mask == 1<<7 { .bytep = (*uint8)(unsafe.Pointer(uintptr(unsafe.Pointer(.bytep)) + 1)) .mask = 1 } else { .mask = .mask << 1 } .index++}// clobberdeadPtr is a special value that is used by the compiler to// clobber dead stack slots, when -clobberdead flag is set.const clobberdeadPtr = uintptr(0xdeaddead | 0xdeaddead<<((^uintptr(0)>>63)*32))// badPointer throws bad pointer in heap panic.func badPointer( *mspan, , , uintptr) {// Typically this indicates an incorrect use // of unsafe or cgo to store a bad pointer in // the Go heap. It may also indicate a runtime // bug. // // TODO(austin): We could be more aggressive // and detect pointers to unallocated objects // in allocated spans.printlock()print("runtime: pointer ", hex())if != nil { := .state.get()if != mSpanInUse {print(" to unallocated span") } else {print(" to unused region of span") }print(" span.base()=", hex(.base()), " span.limit=", hex(.limit), " span.state=", ) }print("\n")if != 0 {print("runtime: found in object at *(", hex(), "+", hex(), ")\n")gcDumpObject("object", , ) }getg().m.traceback = 2throw("found bad pointer in Go heap (incorrect use of unsafe or cgo?)")}// findObject returns the base address for the heap object containing// the address p, the object's span, and the index of the object in s.// If p does not point into a heap object, it returns base == 0.//// If p points is an invalid heap pointer and debug.invalidptr != 0,// findObject panics.//// refBase and refOff optionally give the base address of the object// in which the pointer p was found and the byte offset at which it// was found. These are used for error reporting.//// It is nosplit so it is safe for p to be a pointer to the current goroutine's stack.// Since p is a uintptr, it would not be adjusted if the stack were to move.//// findObject should be an internal detail,// but widely used packages access it using linkname.// Notable members of the hall of shame include:// - github.com/bytedance/sonic//// Do not remove or change the type signature.// See go.dev/issue/67401.////go:linkname findObject//go:nosplitfunc findObject(, , uintptr) ( uintptr, *mspan, uintptr) { = spanOf()// If s is nil, the virtual address has never been part of the heap. // This pointer may be to some mmap'd region, so we allow it.if == nil {if (GOARCH == "amd64" || GOARCH == "arm64") && == clobberdeadPtr && debug.invalidptr != 0 {// Crash if clobberdeadPtr is seen. Only on AMD64 and ARM64 for now, // as they are the only platform where compiler's clobberdead mode is // implemented. On these platforms clobberdeadPtr cannot be a valid address.badPointer(, , , ) }return }// If p is a bad pointer, it may not be in s's bounds. // // Check s.state to synchronize with span initialization // before checking other fields. 
See also spanOfHeap.if := .state.get(); != mSpanInUse || < .base() || >= .limit {// Pointers into stacks are also ok, the runtime manages these explicitly.if == mSpanManual {return }// The following ensures that we are rigorous about what data // structures hold valid pointers.ifdebug.invalidptr != 0 {badPointer(, , , ) }return } = .objIndex() = .base() + *.elemsizereturn}// reflect_verifyNotInHeapPtr reports whether converting the not-in-heap pointer into a unsafe.Pointer is ok.////go:linkname reflect_verifyNotInHeapPtr reflect.verifyNotInHeapPtrfunc reflect_verifyNotInHeapPtr( uintptr) bool {// Conversion to a pointer is ok as long as findObject above does not call badPointer. // Since we're already promised that p doesn't point into the heap, just disallow heap // pointers and the special clobbered pointer.returnspanOf() == nil && != clobberdeadPtr}const ptrBits = 8 * goarch.PtrSize// bulkBarrierBitmap executes write barriers for copying from [src,// src+size) to [dst, dst+size) using a 1-bit pointer bitmap. src is// assumed to start maskOffset bytes into the data covered by the// bitmap in bits (which may not be a multiple of 8).//// This is used by bulkBarrierPreWrite for writes to data and BSS.////go:nosplitfunc bulkBarrierBitmap(, , , uintptr, *uint8) { := / goarch.PtrSize = addb(, /8) := uint8(1) << ( % 8) := &getg().m.p.ptr().wbBuffor := uintptr(0); < ; += goarch.PtrSize {if == 0 { = addb(, 1)if * == 0 {// Skip 8 words. += 7 * goarch.PtrSizecontinue } = 1 }if *& != 0 { := (*uintptr)(unsafe.Pointer( + ))if == 0 { := .get1() [0] = * } else { := (*uintptr)(unsafe.Pointer( + )) := .get2() [0] = * [1] = * } } <<= 1 }}// typeBitsBulkBarrier executes a write barrier for every// pointer that would be copied from [src, src+size) to [dst,// dst+size) by a memmove using the type bitmap to locate those// pointer slots.//// The type typ must correspond exactly to [src, src+size) and [dst, dst+size).// dst, src, and size must be pointer-aligned.// The type typ must have a plain bitmap, not a GC program.// The only use of this function is in channel sends, and the// 64 kB channel element limit takes care of this for us.//// Must not be preempted because it typically runs right before memmove,// and the GC must observe them as an atomic action.//// Callers must perform cgo checks if goexperiment.CgoCheck2.////go:nosplitfunc typeBitsBulkBarrier( *_type, , , uintptr) {if == nil {throw("runtime: typeBitsBulkBarrier without type") }if .Size_ != {println("runtime: typeBitsBulkBarrier with type ", toRType().string(), " of size ", .Size_, " but memory size", )throw("runtime: invalid typeBitsBulkBarrier") }if .Kind_&abi.KindGCProg != 0 {println("runtime: typeBitsBulkBarrier with type ", toRType().string(), " with GC prog")throw("runtime: invalid typeBitsBulkBarrier") }if !writeBarrier.enabled {return } := .GCData := &getg().m.p.ptr().wbBufvaruint32for := uintptr(0); < .PtrBytes; += goarch.PtrSize {if &(goarch.PtrSize*8-1) == 0 { = uint32(*) = addb(, 1) } else { = >> 1 }if &1 != 0 { := (*uintptr)(unsafe.Pointer( + )) := (*uintptr)(unsafe.Pointer( + )) := .get2() [0] = * [1] = * } }}// countAlloc returns the number of objects allocated in span s by// scanning the mark bitmap.func ( *mspan) () int { := 0 := divRoundUp(uintptr(.nelems), 8)// Iterate over each 8-byte chunk and count allocations // with an intrinsic. 
Note that newMarkBits guarantees that // gcmarkBits will be 8-byte aligned, so we don't have to // worry about edge cases, irrelevant bits will simply be zero.for := uintptr(0); < ; += 8 {// Extract 64 bits from the byte pointer and get a OnesCount. // Note that the unsafe cast here doesn't preserve endianness, // but that's OK. We only care about how many bits are 1, not // about the order we discover them in. := *(*uint64)(unsafe.Pointer(.gcmarkBits.bytep())) += sys.OnesCount64() }return}// Read the bytes starting at the aligned pointer p into a uintptr.// Read is little-endian.func readUintptr( *byte) uintptr { := *(*uintptr)(unsafe.Pointer())ifgoarch.BigEndian {ifgoarch.PtrSize == 8 {returnuintptr(sys.Bswap64(uint64())) }returnuintptr(sys.Bswap32(uint32())) }return}var debugPtrmask struct { lock mutex data *byte}// progToPointerMask returns the 1-bit pointer mask output by the GC program prog.// size the size of the region described by prog, in bytes.// The resulting bitvector will have no more than size/goarch.PtrSize bits.func progToPointerMask( *byte, uintptr) bitvector { := (/goarch.PtrSize + 7) / 8 := (*[1 << 30]byte)(persistentalloc(+1, 1, &memstats.buckhash_sys))[:+1] [len()-1] = 0xa1// overflow check sentinel = runGCProg(, &[0])if [len()-1] != 0xa1 {throw("progToPointerMask: overflow") }returnbitvector{int32(), &[0]}}// Packed GC pointer bitmaps, aka GC programs.//// For large types containing arrays, the type information has a// natural repetition that can be encoded to save space in the// binary and in the memory representation of the type information.//// The encoding is a simple Lempel-Ziv style bytecode machine// with the following instructions://// 00000000: stop// 0nnnnnnn: emit n bits copied from the next (n+7)/8 bytes// 10000000 n c: repeat the previous n bits c times; n, c are varints// 1nnnnnnn c: repeat the previous n bits c times; c is a varint// runGCProg returns the number of 1-bit entries written to memory.func runGCProg(, *byte) uintptr { := // Bits waiting to be written to memory.varuintptrvaruintptr := :for {// Flush accumulated full bytes. // The rest of the loop assumes that nbits <= 7.for ; >= 8; -= 8 { * = uint8() = add1() >>= 8 }// Process one instruction. := uintptr(*) = add1() := & 0x7Fif &0x80 == 0 {// Literal bits; n == 0 means end of program.if == 0 {// Program is over.break } := / 8for := uintptr(0); < ; ++ { |= uintptr(*) << = add1() * = uint8() = add1() >>= 8 }if %= 8; > 0 { |= uintptr(*) << = add1() += }continue }// Repeat. If n == 0, it is encoded in a varint in the next bytes.if == 0 {for := uint(0); ; += 7 { := uintptr(*) = add1() |= ( & 0x7F) << if &0x80 == 0 {break } } }// Count is encoded in a varint in the next bytes. := uintptr(0)for := uint(0); ; += 7 { := uintptr(*) = add1() |= ( & 0x7F) << if &0x80 == 0 {break } } *= // now total number of bits to copy// If the number of bits being repeated is small, load them // into a register and use that register for the entire loop // instead of repeatedly reading from memory. // Handling fewer than 8 bits here makes the general loop simpler. // The cutoff is goarch.PtrSize*8 - 7 to guarantee that when we add // the pattern to a bit buffer holding at most 7 bits (a partial byte) // it will not overflow. := const = goarch.PtrSize*8 - 7if <= {// Start with bits in output buffer. := := // If we need more bits, fetch them from memory. = subtract1()for < { <<= 8 |= uintptr(*) = subtract1() += 8 }// We started with the whole bit output buffer, // and then we loaded bits from whole bytes. 
// Either way, we might now have too many instead of too few. // Discard the extra.if > { >>= - = }// Replicate pattern to at most maxBits.if == 1 {// One bit being repeated. // If the bit is 1, make the pattern all 1s. // If the bit is 0, the pattern is already all 0s, // but we can claim that the number of bits // in the word is equal to the number we need (c), // because right shift of bits will zero fill.if == 1 { = 1<< - 1 = } else { = } } else { := := if + <= {// Double pattern until the whole uintptr is filled.for <= goarch.PtrSize*8 { |= << += }// Trim away incomplete copy of original pattern in high bits. // TODO(rsc): Replace with table lookup or loop on systems without divide? = / * &= 1<< - 1 = = } }// Add pattern to bit buffer and flush bit buffer, c/npattern times. // Since pattern contains >8 bits, there will be full bytes to flush // on each iteration.for ; >= ; -= { |= << += for >= 8 { * = uint8() = add1() >>= 8 -= 8 } }// Add final fragment to bit buffer.if > 0 { &= 1<< - 1 |= << += }continue }// Repeat; n too large to fit in a register. // Since nbits <= 7, we know the first few bytes of repeated data // are already written to memory. := - // n > nbits because n > maxBits and nbits <= 7// Leading src fragment. = subtractb(, (+7)/8)if := & 7; != 0 { |= uintptr(*) >> (8 - ) << = add1() += -= }// Main loop: load one byte, write another. // The bits are rotating through the bit buffer.for := / 8; > 0; -- { |= uintptr(*) << = add1() * = uint8() = add1() >>= 8 }// Final src fragment.if %= 8; > 0 { |= (uintptr(*) & (1<< - 1)) << += } }// Write any final bits out, using full-byte writes, even for the final byte. := (uintptr(unsafe.Pointer())-uintptr(unsafe.Pointer()))*8 + += - & 7for ; > 0; -= 8 { * = uint8() = add1() >>= 8 }return}// materializeGCProg allocates space for the (1-bit) pointer bitmask// for an object of size ptrdata. Then it fills that space with the// pointer bitmask specified by the program prog.// The bitmask starts at s.startAddr.// The result must be deallocated with dematerializeGCProg.func materializeGCProg( uintptr, *byte) *mspan {// Each word of ptrdata needs one bit in the bitmap. := divRoundUp(, 8*goarch.PtrSize)// Compute the number of pages needed for bitmapBytes. := divRoundUp(, pageSize) := mheap_.allocManual(, spanAllocPtrScalarBits)runGCProg(addb(, 4), (*byte)(unsafe.Pointer(.startAddr)))return}func dematerializeGCProg( *mspan) {mheap_.freeManual(, spanAllocPtrScalarBits)}func dumpGCProg( *byte) { := 0for { := * = add1()if == 0 {print("\t", , " end\n")break }if &0x80 == 0 {print("\t", , " lit ", , ":") := int(+7) / 8for := 0; < ; ++ {print(" ", hex(*)) = add1() }print("\n") += int() } else { := int( &^ 0x80)if == 0 {for := uint(0); ; += 7 { := * = add1() |= int(&0x7f) << if &0x80 == 0 {break } } } := 0for := uint(0); ; += 7 { := * = add1() |= int(&0x7f) << if &0x80 == 0 {break } }print("\t", , " repeat ", , " × ", , "\n") += * } }}// Testing.// reflect_gcbits returns the GC type info for x, for testing.// The result is the bitmap entries (0 or 1), one entry per byte.////go:linkname reflect_gcbits reflect.gcbitsfunc reflect_gcbits( any) []byte {returngetgcmask()}// Returns GC type info for the pointer stored in ep for testing.// If ep points to the stack, only static live information will be returned// (i.e. 
not for objects which are only dynamically live stack objects).func getgcmask( any) ( []byte) { := *efaceOf(&) := .data := ._typevar *_typeif .Kind_&abi.KindMask != abi.Pointer {throw("bad argument to getgcmask: expected type to be a pointer to the value type whose mask is being queried") } = (*ptrtype)(unsafe.Pointer()).Elem// data or bssfor , := rangeactiveModules() {// dataif .data <= uintptr() && uintptr() < .edata { := .gcdatamask.bytedata := .Size_ = make([]byte, /goarch.PtrSize)for := uintptr(0); < ; += goarch.PtrSize { := (uintptr() + - .data) / goarch.PtrSize [/goarch.PtrSize] = (*addb(, /8) >> ( % 8)) & 1 }return }// bssif .bss <= uintptr() && uintptr() < .ebss { := .gcbssmask.bytedata := .Size_ = make([]byte, /goarch.PtrSize)for := uintptr(0); < ; += goarch.PtrSize { := (uintptr() + - .bss) / goarch.PtrSize [/goarch.PtrSize] = (*addb(, /8) >> ( % 8)) & 1 }return } }// heapif , , := findObject(uintptr(), 0, 0); != 0 {if .spanclass.noscan() {returnnil } := + .elemsize// Move the base up to the iterator's start, because // we want to hide evidence of a malloc header from the // caller. := .typePointersOfUnchecked() = .addr// Unroll the full bitmap the GC would actually observe. := make([]byte, (-)/goarch.PtrSize)for {varuintptrif , = .next(); == 0 {break } [(-)/goarch.PtrSize] = 1 }// Double-check that every part of the ptr/scalar we're not // showing the caller is zeroed. This keeps us honest that // that information is actually irrelevant.for := ; < .elemsize; ++ {if *(*byte)(unsafe.Pointer()) != 0 {throw("found non-zeroed tail of allocation") } }// Callers (and a check we're about to run) expects this mask // to end at the last pointer.forlen() > 0 && [len()-1] == 0 { = [:len()-1] }if .Kind_&abi.KindGCProg == 0 {// Unroll again, but this time from the type information. := make([]byte, (-)/goarch.PtrSize) = .typePointersOfType(, )for {varuintptrif , = .next(); == 0 {break } [(-)/goarch.PtrSize] = 1 }// Validate that the prefix of maskFromType is equal to // maskFromHeap. maskFromType may contain more pointers than // maskFromHeap produces because maskFromHeap may be able to // get exact type information for certain classes of objects. // With maskFromType, we're always just tiling the type bitmap // through to the elemsize. // // It's OK if maskFromType has pointers in elemsize that extend // past the actual populated space; we checked above that all // that space is zeroed, so just the GC will just see nil pointers. := falsefor := range {if [] != [] { = truebreak } }if {print("runtime: heap mask=")for , := range {print() }println()print("runtime: type mask=")for , := range {print() }println()print("runtime: type=", toRType().string(), "\n")throw("found two different masks from two different methods") } }// Select the heap mask to return. We may not have a type mask. = // Make sure we keep ep alive. 
We may have stopped referencing // ep's data pointer sometime before this point and it's possible // for that memory to get freed.KeepAlive()return }// stackif := getg(); .m.curg.stack.lo <= uintptr() && uintptr() < .m.curg.stack.hi { := falsevarunwinderfor .initAt(.m.curg.sched.pc, .m.curg.sched.sp, 0, .m.curg, 0); .valid(); .next() {if .frame.sp <= uintptr() && uintptr() < .frame.varp { = truebreak } }if { , , := .frame.getStackMap(false)if .n == 0 {return } := uintptr(.n) * goarch.PtrSize := (*ptrtype)(unsafe.Pointer()).Elem.Size_ = make([]byte, /goarch.PtrSize)for := uintptr(0); < ; += goarch.PtrSize { := (uintptr() + - .frame.varp + ) / goarch.PtrSize [/goarch.PtrSize] = .ptrbit() } }return }// otherwise, not something the GC knows about. // possibly read-only data, like malloc(0). // must not have pointersreturn}
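
// An illustrative sketch of the mask format computed above (the type T and
// the call below are hypothetical, not part of the runtime): getgcmask
// reports one byte per pointer-sized word of the value, 0 for scalar and 1
// for pointer, trimmed after the last pointer word. On a 64-bit platform:
//
//	type T struct {
//		a uintptr // word 0: scalar
//		p *int    // word 1: pointer
//		b uintptr // word 2: scalar, trimmed from the mask
//	}
//
// For a heap-allocated *T, reflect.gcbits (linknamed to reflect_gcbits above)
// would be expected to return []byte{0, 1}.
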