package runtime
Import Path
runtime (on go.dev)
Dependency Relation
imports 13 packages, and imported by 28 packages
Involved Source Files
alg.go
arena.go
asan0.go
atomic_pointer.go
cgo.go
cgo_mmap.go
cgo_sigaction.go
cgocall.go
cgocallback.go
cgocheck.go
chan.go
checkptr.go
compiler.go
complex.go
covercounter.go
covermeta.go
cpuflags.go
cpuflags_amd64.go
cpuprof.go
cputicks.go
create_file_unix.go
debug.go
debugcall.go
debuglog.go
debuglog_off.go
defs_linux_amd64.go
env_posix.go
error.go
exithook.go
Package runtime contains operations that interact with Go's runtime system,
such as functions to control goroutines. It also includes the low-level type information
used by the reflect package; see reflect's documentation for the programmable
interface to the run-time type system.
# Environment Variables
The following environment variables ($name or %name%, depending on the host
operating system) control the run-time behavior of Go programs. The meanings
and use may change from release to release.
The GOGC variable sets the initial garbage collection target percentage.
A collection is triggered when the ratio of freshly allocated data to live data
remaining after the previous collection reaches this percentage. The default
is GOGC=100. Setting GOGC=off disables the garbage collector entirely.
[runtime/debug.SetGCPercent] allows changing this percentage at run time.
The GOMEMLIMIT variable sets a soft memory limit for the runtime. This memory limit
includes the Go heap and all other memory managed by the runtime, and excludes
external memory sources such as mappings of the binary itself, memory managed in
other languages, and memory held by the operating system on behalf of the Go
program. GOMEMLIMIT is a numeric value in bytes with an optional unit suffix.
The supported suffixes include B, KiB, MiB, GiB, and TiB. These suffixes
represent quantities of bytes as defined by the IEC 80000-13 standard. That is,
they are based on powers of two: KiB means 2^10 bytes, MiB means 2^20 bytes,
and so on. The default setting is math.MaxInt64, which effectively disables the
memory limit. [runtime/debug.SetMemoryLimit] allows changing this limit at run
time.
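Both settings have run-time equivalents in the runtime/debug package. A minimal sketch (the 50% target and the 512 MiB limit are arbitrary example values, not defaults):
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Equivalent to starting the program with GOGC=50: trigger a
	// collection when the heap grows to 50% over the live data
	// remaining after the previous collection.
	oldPercent := debug.SetGCPercent(50)

	// Equivalent to starting the program with GOMEMLIMIT=512MiB:
	// a soft limit on all memory managed by the runtime.
	oldLimit := debug.SetMemoryLimit(512 << 20)

	fmt.Println("previous GOGC percent:", oldPercent)
	fmt.Println("previous memory limit:", oldLimit)
}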
The GODEBUG variable controls debugging variables within the runtime.
It is a comma-separated list of name=val pairs setting these named variables:
allocfreetrace: setting allocfreetrace=1 causes every allocation to be
profiled and a stack trace printed on each object's allocation and free.
clobberfree: setting clobberfree=1 causes the garbage collector to
clobber the memory content of an object with bad content when it frees
the object.
cpu.*: cpu.all=off disables the use of all optional instruction set extensions.
cpu.extension=off disables use of instructions from the specified instruction set extension.
extension is the lower case name for the instruction set extension such as sse41 or avx
as listed in internal/cpu package. As an example cpu.avx=off disables runtime detection
and thereby use of AVX instructions.
cgocheck: setting cgocheck=0 disables all checks for packages
using cgo to incorrectly pass Go pointers to non-Go code.
Setting cgocheck=1 (the default) enables relatively cheap
checks that may miss some errors. A more complete, but slow,
cgocheck mode can be enabled using GOEXPERIMENT (which
requires a rebuild), see https://pkg.go.dev/internal/goexperiment for details.
dontfreezetheworld: by default, the start of a fatal panic or throw
"freezes the world", preempting all threads to stop all running
goroutines, which makes it possible to traceback all goroutines, and
keeps their state close to the point of panic. Setting
dontfreezetheworld=1 disables this preemption, allowing goroutines to
continue executing during panic processing. Note that goroutines that
naturally enter the scheduler will still stop. This can be useful when
debugging the runtime scheduler, as freezetheworld perturbs scheduler
state and thus may hide problems.
efence: setting efence=1 causes the allocator to run in a mode
where each object is allocated on a unique page and addresses are
never recycled.
gccheckmark: setting gccheckmark=1 enables verification of the
garbage collector's concurrent mark phase by performing a
second mark pass while the world is stopped. If the second
pass finds a reachable object that was not found by concurrent
mark, the garbage collector will panic.
gcpacertrace: setting gcpacertrace=1 causes the garbage collector to
print information about the internal state of the concurrent pacer.
gcshrinkstackoff: setting gcshrinkstackoff=1 disables moving goroutines
onto smaller stacks. In this mode, a goroutine's stack can only grow.
gcstoptheworld: setting gcstoptheworld=1 disables concurrent garbage collection,
making every garbage collection a stop-the-world event. Setting gcstoptheworld=2
also disables concurrent sweeping after the garbage collection finishes.
gctrace: setting gctrace=1 causes the garbage collector to emit a single line to standard
error at each collection, summarizing the amount of memory collected and the
length of the pause. The format of this line is subject to change. Included in
the explanation below is also the relevant runtime/metrics metric for each field.
Currently, it is:
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # MB stacks, #MB globals, # P
where the fields are as follows:
gc # the GC number, incremented at each GC
@#s time in seconds since program start
#% percentage of time spent in GC since program start
#+...+# wall-clock/CPU times for the phases of the GC
#->#-># MB heap size at GC start, at GC end, and live heap, or /gc/scan/heap:bytes
# MB goal goal heap size, or /gc/heap/goal:bytes
# MB stacks estimated scannable stack size, or /gc/scan/stack:bytes
# MB globals scannable global size, or /gc/scan/globals:bytes
# P number of processors used, or /sched/gomaxprocs:threads
The phases are stop-the-world (STW) sweep termination, concurrent
mark and scan, and STW mark termination. The CPU times
for mark/scan are broken down into assist time (GC performed in
line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a
runtime.GC() call.
harddecommit: setting harddecommit=1 causes memory that is returned to the OS to
also have protections removed on it. This is the only mode of operation on Windows,
but is helpful in debugging scavenger-related issues on other platforms. Currently,
only supported on Linux.
inittrace: setting inittrace=1 causes the runtime to emit a single line to standard
error for each package with init work, summarizing the execution time and memory
allocation. No information is printed for inits executed as part of plugin loading
and for packages without both user defined and compiler generated init work.
The format of this line is subject to change. Currently, it is:
init # @#ms, # ms clock, # bytes, # allocs
where the fields are as follows:
init # the package name
@# ms time in milliseconds when the init started since program start
# clock wall-clock time for package initialization work
# bytes memory allocated on the heap
# allocs number of heap allocations
madvdontneed: setting madvdontneed=0 will use MADV_FREE
instead of MADV_DONTNEED on Linux when returning memory to the
kernel. This is more efficient, but means RSS numbers will
drop only when the OS is under memory pressure. On the BSDs and
Illumos/Solaris, setting madvdontneed=1 will use MADV_DONTNEED instead
of MADV_FREE. This is less efficient, but causes RSS numbers to drop
more quickly.
memprofilerate: setting memprofilerate=X will update the value of runtime.MemProfileRate.
When set to 0 memory profiling is disabled. Refer to the description of
MemProfileRate for the default value.
pagetrace: setting pagetrace=/path/to/file will write out a trace of page events
that can be viewed, analyzed, and visualized using the x/debug/cmd/pagetrace tool.
Build your program with GOEXPERIMENT=pagetrace to enable this functionality. Do not
enable this functionality if your program is a setuid binary as it introduces a security
risk in that scenario. Currently not supported on Windows, plan9 or js/wasm. Setting this
option for some applications can produce large traces, so use with care.
invalidptr: invalidptr=1 (the default) causes the garbage collector and stack
copier to crash the program if an invalid pointer value (for example, 1)
is found in a pointer-typed location. Setting invalidptr=0 disables this check.
This should only be used as a temporary workaround to diagnose buggy code.
The real fix is to not store integers in pointer-typed locations.
sbrk: setting sbrk=1 replaces the memory allocator and garbage collector
with a trivial allocator that obtains memory from the operating system and
never reclaims any memory.
scavtrace: setting scavtrace=1 causes the runtime to emit a single line to standard
error, roughly once per GC cycle, summarizing the amount of work done by the
scavenger as well as the total amount of memory returned to the operating system
and an estimate of physical memory utilization. The format of this line is subject
to change, but currently it is:
scav # KiB work (bg), # KiB work (eager), # KiB total, #% util
where the fields are as follows:
# KiB work (bg) the amount of memory returned to the OS in the background since
the last line
# KiB work (eager) the amount of memory returned to the OS eagerly since the last line
# KiB total the amount of address space currently returned to the OS
#% util the fraction of all unscavenged heap memory which is in-use
If the line ends with "(forced)", then scavenging was forced by a
debug.FreeOSMemory() call.
scheddetail: setting schedtrace=X and scheddetail=1 causes the scheduler to emit
detailed multiline info every X milliseconds, describing state of the scheduler,
processors, threads and goroutines.
schedtrace: setting schedtrace=X causes the scheduler to emit a single line to standard
error every X milliseconds, summarizing the scheduler state.
tracebackancestors: setting tracebackancestors=N extends tracebacks with the stacks at
which goroutines were created, where N limits the number of ancestor goroutines to
report. This also extends the information returned by runtime.Stack. Ancestors' goroutine
IDs will refer to the ID of the goroutine at the time of creation; it's possible for this
ID to be reused for another goroutine. Setting N to 0 will report no ancestry information.
tracefpunwindoff: setting tracefpunwindoff=1 forces the execution tracer to
use the runtime's default stack unwinder instead of frame pointer unwinding.
This increases tracer overhead, but could be helpful as a workaround or for
debugging unexpected regressions caused by frame pointer unwinding.
asyncpreemptoff: asyncpreemptoff=1 disables signal-based
asynchronous goroutine preemption. This makes some loops
non-preemptible for long periods, which may delay GC and
goroutine scheduling. This is useful for debugging GC issues
because it also disables the conservative stack scanning used
for asynchronously preempted goroutines.
The net and net/http packages also refer to debugging variables in GODEBUG.
See the documentation for those packages for details.
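GODEBUG is normally set in the environment before the program starts. A minimal sketch of enabling gctrace=1 for a child process, assuming a hypothetical ./worker binary (the gctrace summary lines appear on the child's standard error, which is forwarded here):
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("./worker") // hypothetical binary for this sketch
	cmd.Env = append(os.Environ(), "GODEBUG=gctrace=1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr // gctrace output arrives here
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}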
The GOMAXPROCS variable limits the number of operating system threads that
can execute user-level Go code simultaneously. There is no limit to the number of threads
that can be blocked in system calls on behalf of Go code; those do not count against
the GOMAXPROCS limit. This package's GOMAXPROCS function queries and changes
the limit.
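A short sketch of querying and adjusting the limit with this package's GOMAXPROCS function (passing 0 queries the current value without changing it; the value 2 below is only an example):
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// GOMAXPROCS(0) reports the current setting without changing it.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
	fmt.Println("NumCPU:", runtime.NumCPU())

	// Limit user-level Go code to two OS threads; the previous setting
	// is returned so it can be restored later if desired.
	prev := runtime.GOMAXPROCS(2)
	fmt.Println("previous setting:", prev)
}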
The GORACE variable configures the race detector, for programs built using -race.
See https://golang.org/doc/articles/race_detector.html for details.
The GOTRACEBACK variable controls the amount of output generated when a Go
program fails due to an unrecovered panic or an unexpected runtime condition.
By default, a failure prints a stack trace for the current goroutine,
eliding functions internal to the run-time system, and then exits with exit code 2.
The failure prints stack traces for all goroutines if there is no current goroutine
or the failure is internal to the run-time.
GOTRACEBACK=none omits the goroutine stack traces entirely.
GOTRACEBACK=single (the default) behaves as described above.
GOTRACEBACK=all adds stack traces for all user-created goroutines.
GOTRACEBACK=system is like “all” but adds stack frames for run-time functions
and shows goroutines created internally by the run-time.
GOTRACEBACK=crash is like “system” but crashes in an operating system-specific
manner instead of exiting. For example, on Unix systems, the crash raises
SIGABRT to trigger a core dump.
GOTRACEBACK=wer is like “crash” but doesn't disable Windows Error Reporting (WER).
For historical reasons, the GOTRACEBACK settings 0, 1, and 2 are synonyms for
none, all, and system, respectively.
The runtime/debug package's SetTraceback function allows increasing the
amount of output at run time, but it cannot reduce the amount below that
specified by the environment variable.
See https://golang.org/pkg/runtime/debug/#SetTraceback.
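For example, a program that wants more detail in its own crash reports can raise the level at startup; a minimal sketch:
package main

import "runtime/debug"

func main() {
	// Behave as if GOTRACEBACK=all had been set: on an unrecovered
	// panic, print stacks for all user-created goroutines. This cannot
	// lower the level below what the environment variable requests.
	debug.SetTraceback("all")

	// ... rest of the program ...
}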
The GOARCH, GOOS, GOPATH, and GOROOT environment variables complete
the set of Go environment variables. They influence the building of Go programs
(see https://golang.org/cmd/go and https://golang.org/pkg/go/build).
GOARCH, GOOS, and GOROOT are recorded at compile time and made available by
constants or functions in this package, but they do not influence the execution
of the run-time system.
# Security
On Unix platforms, Go's runtime system behaves slightly differently when a
binary is setuid/setgid or executed with setuid/setgid-like properties, in order
to prevent dangerous behaviors. On Linux this is determined by checking for the
AT_SECURE flag in the auxiliary vector, on the BSDs and Solaris/Illumos it is
determined by checking the issetugid syscall, and on AIX it is determined by
checking if the uid/gid match the effective uid/gid.
When the runtime determines the binary is setuid/setgid-like, it does three main
things:
- The standard input/output file descriptors (0, 1, 2) are checked to be open.
If any of them are closed, they are opened pointing at /dev/null.
- The value of the GOTRACEBACK environment variable is set to 'none'.
- When a signal is received that terminates the program, or the program
encounters an unrecoverable panic that would otherwise override the value
of GOTRACEBACK, the goroutine stack, registers, and other memory related
information are omitted.
fastlog2.go
fastlog2table.go
float.go
hash64.go
heapdump.go
histogram.go
iface.go
lfstack.go
lock_futex.go
lockrank.go
lockrank_off.go
malloc.go
map.go
map_fast32.go
map_fast64.go
map_faststr.go
mbarrier.go
mbitmap.go
mcache.go
mcentral.go
mcheckmark.go
mem.go
mem_linux.go
metrics.go
mfinal.go
mfixalloc.go
mgc.go
mgclimit.go
mgcmark.go
mgcpacer.go
mgcscavenge.go
mgcstack.go
mgcsweep.go
mgcwork.go
mheap.go
minmax.go
mpagealloc.go
mpagealloc_64bit.go
mpagecache.go
mpallocbits.go
mprof.go
mranges.go
msan0.go
msize.go
mspanset.go
mstats.go
mwbbuf.go
nbpipe_pipe2.go
netpoll.go
netpoll_epoll.go
nonwindows_stub.go
os_linux.go
os_linux_generic.go
os_linux_noauxv.go
os_linux_x86.go
os_nonopenbsd.go
os_unix.go
pagetrace_off.go
panic.go
pinner.go
plugin.go
preempt.go
preempt_nonwindows.go
print.go
proc.go
profbuf.go
proflabel.go
race0.go
rdebug.go
retry.go
runtime.go
runtime1.go
runtime2.go
runtime_boring.go
rwmutex.go
security_linux.go
security_unix.go
select.go
sema.go
signal_amd64.go
signal_linux_amd64.go
signal_unix.go
sigqueue.go
sigqueue_note.go
sigtab_linux_generic.go
sizeclasses.go
slice.go
softfloat64.go
stack.go
stkframe.go
string.go
stubs.go
stubs2.go
stubs3.go
stubs_amd64.go
stubs_linux.go
symtab.go
symtabinl.go
sys_nonppc64x.go
sys_x86.go
tagptr.go
tagptr_64bit.go
test_amd64.go
time.go
time_nofake.go
timeasm.go
tls_stub.go
trace.go
traceback.go
type.go
typekind.go
unsafe.go
utf8.go
vdso_elf64.go
vdso_linux.go
vdso_linux_amd64.go
write_err.go
asm_amd64.h
asm_ppc64x.h
funcdata.h
go_tls.h
textflag.h
asm.s
asm_amd64.s
duff_amd64.s
memclr_amd64.s
memmove_amd64.s
preempt_amd64.s
rt0_linux_amd64.s
sys_linux_amd64.s
test_amd64.s
time_linux_amd64.s
Code Examples
package main

import (
	"fmt"
	"runtime"
	"strings"
)

func main() {
	c := func() {
		// Ask runtime.Callers for up to 10 PCs, including runtime.Callers itself.
		pc := make([]uintptr, 10)
		n := runtime.Callers(0, pc)
		if n == 0 {
			// No PCs available. This can happen if the first argument to
			// runtime.Callers is large.
			//
			// Return now to avoid processing the zero Frame that would
			// otherwise be returned by frames.Next below.
			return
		}

		pc = pc[:n] // pass only valid pcs to runtime.CallersFrames
		frames := runtime.CallersFrames(pc)

		// Loop to get frames.
		// A fixed number of PCs can expand to an indefinite number of Frames.
		for {
			frame, more := frames.Next()

			// Process this frame.
			//
			// To keep this example's output stable
			// even if there are changes in the testing package,
			// stop unwinding when we leave package runtime.
			if !strings.Contains(frame.File, "runtime/") {
				break
			}
			fmt.Printf("- more:%v | %s\n", more, frame.Function)

			// Check whether there are more frames to process after this one.
			if !more {
				break
			}
		}
	}

	b := func() { c() }
	a := func() { b() }

	a()
}
Package-Level Type Names (total 328, in which 11 are exported)
BlockProfileRecord describes blocking events originated
at a particular call sequence (stack trace).
Count int64
Cycles int64
StackRecord StackRecord
// stack trace for this record; ends at first 0 entry
Stack returns the stack trace associated with the record,
a prefix of r.Stack0.
func BlockProfile(p []BlockProfileRecord) (n int, ok bool)
func MutexProfile(p []BlockProfileRecord) (n int, ok bool)
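A minimal sketch of collecting a block profile directly (most programs use the runtime/pprof package instead); the rate of 1, the 10 ms sleep, and the slack of 50 records are arbitrary example choices:
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	// Record every blocking event.
	runtime.SetBlockProfileRate(1)

	// Produce one blocking event: the main goroutine waits on a mutex
	// held by another goroutine for about 10 ms.
	var mu sync.Mutex
	mu.Lock()
	go func() {
		time.Sleep(10 * time.Millisecond)
		mu.Unlock()
	}()
	mu.Lock()

	// BlockProfile reports the required length if the slice is too
	// short, so grow and retry until it fits.
	n, ok := runtime.BlockProfile(nil)
	var records []runtime.BlockProfileRecord
	for !ok {
		records = make([]runtime.BlockProfileRecord, n+50)
		n, ok = runtime.BlockProfile(records)
	}
	records = records[:n]

	for _, r := range records {
		fmt.Printf("count=%d cycles=%d frames=%d\n", r.Count, r.Cycles, len(r.Stack()))
	}
}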
The Error interface identifies a run time error.
( Error) Error() builtin.string
RuntimeError is a no-op function but
serves to distinguish types that are run time
errors from ordinary errors: a type is a
run time error if it has a RuntimeError method.
*PanicNilError
*TypeAssertionError
boundsError
errorAddressString
errorString
plainError
Error : error
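A run-time failure such as an out-of-range index can be told apart from other panic values by asserting the recovered value to this interface; a minimal sketch:
package main

import (
	"fmt"
	"runtime"
)

func main() {
	defer func() {
		switch r := recover().(type) {
		case nil:
			// No panic occurred.
		case runtime.Error:
			// Out-of-range indexes, nil map writes, and similar
			// failures arrive here.
			fmt.Println("runtime error:", r.Error())
		default:
			// Some other panic value; re-panic to preserve it.
			panic(r)
		}
	}()

	var s []int
	_ = s[3] // out of range: panics with a value implementing runtime.Error
}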
Frame is the information returned by Frames for each call frame.
Entry point program counter for the function; may be zero
if not known. If Func is not nil then Entry ==
Func.Entry().
File and Line are the file name and line number of the
location in this frame. For non-leaf frames, this will be
the location of a call. These may be the empty string and
zero, respectively, if not known.
Func is the Func value of this call frame. This may be nil
for non-Go code or fully inlined functions.
Function is the package path-qualified function name of
this call frame. If non-empty, this string uniquely
identifies a single function in the program.
This may be the empty string if not known.
If Func is not nil then Function == Func.Name().
Line int
PC is the program counter for the location in this frame.
For a frame that calls another frame, this will be the
program counter of a call instruction. Because of inlining,
multiple frames may have the same PC value, but different
symbolic information.
The runtime's internal view of the function. This field
is set (funcInfo.valid() returns true) only for Go functions,
not for C functions.
startLine is the line number of the beginning of the function in
this frame. Specifically, it is the line number of the func keyword
for Go functions. Note that //line directives can change the
filename and/or line number arbitrarily within a function, meaning
that the Line - startLine offset is not always meaningful.
This may be zero if not known.
func (*Frames).Next() (frame Frame, more bool)
func go.uber.org/zap/internal/stacktrace.(*Stack).Next() (_ Frame, more bool)
func expandCgoFrames(pc uintptr) []Frame
func net/http.relevantCaller() Frame
func go.uber.org/zap/internal/stacktrace.(*Formatter).FormatFrame(frame Frame)
func runtime_FrameStartLine(f *Frame) int
func runtime_FrameSymbolName(f *Frame) string
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
Frames may be used to get function/file/line information for a
slice of PC values returned by Callers.
callers is a slice of PCs that have not yet been expanded to frames.
frameStore [2]Frame
frames is a slice of Frames that have yet to be returned.
Next returns a Frame representing the next call frame in the slice
of PC values. If it has already returned all call frames, Next
returns a zero Frame.
The more result indicates whether the next call to Next will return
a valid Frame. It does not necessarily indicate whether this call
returned one.
See the Frames example for idiomatic usage.
func CallersFrames(callers []uintptr) *Frames
A Func represents a Go function in the running binary.
// unexported field to disallow conversions
Entry returns the entry address of the function.
FileLine returns the file name and line number of the
source code corresponding to the program counter pc.
The result will not be accurate if pc is not a program
counter within f.
Name returns the name of the function.
(*Func) funcInfo() funcInfo
(*Func) raw() *_func
startLine returns the starting line number of the function. i.e., the line
number of the func keyword.
func FuncForPC(pc uintptr) *Func
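A small sketch that resolves the caller's program counter to a *Func and reports its name, entry address, and source position:
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Caller(0) reports the PC at this call site.
	pc, _, _, ok := runtime.Caller(0)
	if !ok {
		return
	}
	f := runtime.FuncForPC(pc)
	if f == nil {
		return // no Go function information for this PC
	}
	file, line := f.FileLine(pc)
	fmt.Printf("%s (entry %#x) at %s:%d\n", f.Name(), f.Entry(), file, line)
}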
A MemProfileRecord describes the live objects allocated
by a particular call sequence (stack trace).
// number of bytes allocated, freed
// number of objects allocated, freed
// number of bytes allocated, freed
// number of objects allocated, freed
// stack trace for this record; ends at first 0 entry
InUseBytes returns the number of bytes in use (AllocBytes - FreeBytes).
InUseObjects returns the number of objects in use (AllocObjects - FreeObjects).
Stack returns the stack trace associated with the record,
a prefix of r.Stack0.
func MemProfile(p []MemProfileRecord, inuseZero bool) (n int, ok bool)
func record(r *MemProfileRecord, b *bucket)
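A minimal sketch of reading the heap profile directly and summing the in-use bytes (most programs use the runtime/pprof package instead); the profiling rate of 1, the 1 KiB allocations, and the slack of 50 records are arbitrary example choices:
package main

import (
	"fmt"
	"runtime"
)

func sampledInUseBytes() int64 {
	// MemProfile reports the required length if the slice is too short,
	// so grow and retry until it fits. inuseZero=false skips records
	// whose allocations have all been freed.
	n, ok := runtime.MemProfile(nil, false)
	var records []runtime.MemProfileRecord
	for !ok {
		records = make([]runtime.MemProfileRecord, n+50)
		n, ok = runtime.MemProfile(records, false)
	}
	records = records[:n]

	var inUse int64
	for i := range records {
		inUse += records[i].InUseBytes()
	}
	return inUse
}

func main() {
	// Sample every allocation. To cover the whole program this should
	// be set as early as possible, ideally in a variable initializer.
	runtime.MemProfileRate = 1

	data := make([][]byte, 100)
	for i := range data {
		data[i] = make([]byte, 1<<10)
	}
	runtime.GC() // the profile may lag by up to two GC cycles
	fmt.Println("sampled bytes in use:", sampledInUseBytes())
	runtime.KeepAlive(data)
}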
A MemStats records statistics about the memory allocator.
Alloc is bytes of allocated heap objects.
This is the same as HeapAlloc (see below).
BuckHashSys is bytes of memory in profiling bucket hash tables.
BySize reports per-size class allocation statistics.
BySize[N] gives statistics for allocations of size S where
BySize[N-1].Size < S ≤ BySize[N].Size.
This does not report allocations larger than BySize[60].Size.
DebugGC is currently unused.
EnableGC indicates that GC is enabled. It is always true,
even if GOGC=off.
Frees is the cumulative count of heap objects freed.
GCCPUFraction is the fraction of this program's available
CPU time used by the GC since the program started.
GCCPUFraction is expressed as a number between 0 and 1,
where 0 means GC has consumed none of this program's CPU. A
program's available CPU time is defined as the integral of
GOMAXPROCS since the program started. That is, if
GOMAXPROCS is 2 and a program has been running for 10
seconds, its "available CPU" is 20 seconds. GCCPUFraction
does not include CPU time used for write barrier activity.
This is the same as the fraction of CPU reported by
GODEBUG=gctrace=1.
GCSys is bytes of memory in garbage collection metadata.
HeapAlloc is bytes of allocated heap objects.
"Allocated" heap objects include all reachable objects, as
well as unreachable objects that the garbage collector has
not yet freed. Specifically, HeapAlloc increases as heap
objects are allocated and decreases as the heap is swept
and unreachable objects are freed. Sweeping occurs
incrementally between GC cycles, so these two processes
occur simultaneously, and as a result HeapAlloc tends to
change smoothly (in contrast with the sawtooth that is
typical of stop-the-world garbage collectors).
HeapIdle is bytes in idle (unused) spans.
Idle spans have no objects in them. These spans could be
(and may already have been) returned to the OS, or they can
be reused for heap allocations, or they can be reused as
stack memory.
HeapIdle minus HeapReleased estimates the amount of memory
that could be returned to the OS, but is being retained by
the runtime so it can grow the heap without requesting more
memory from the OS. If this difference is significantly
larger than the heap size, it indicates there was a recent
transient spike in live heap size.
HeapInuse is bytes in in-use spans.
In-use spans have at least one object in them. These spans
can only be used for other objects of roughly the same
size.
HeapInuse minus HeapAlloc estimates the amount of memory
that has been dedicated to particular size classes, but is
not currently being used. This is an upper bound on
fragmentation, but in general this memory can be reused
efficiently.
HeapObjects is the number of allocated heap objects.
Like HeapAlloc, this increases as objects are allocated and
decreases as the heap is swept and unreachable objects are
freed.
HeapReleased is bytes of physical memory returned to the OS.
This counts heap memory from idle spans that was returned
to the OS and has not yet been reacquired for the heap.
HeapSys is bytes of heap memory obtained from the OS.
HeapSys measures the amount of virtual address space
reserved for the heap. This includes virtual address space
that has been reserved but not yet used, which consumes no
physical memory, but tends to be small, as well as virtual
address space for which the physical memory has been
returned to the OS after it became unused (see HeapReleased
for a measure of the latter).
HeapSys estimates the largest size the heap has had.
LastGC is the time the last garbage collection finished, as
nanoseconds since 1970 (the UNIX epoch).
Lookups is the number of pointer lookups performed by the
runtime.
This is primarily useful for debugging runtime internals.
MCacheInuse is bytes of allocated mcache structures.
MCacheSys is bytes of memory obtained from the OS for
mcache structures.
MSpanInuse is bytes of allocated mspan structures.
MSpanSys is bytes of memory obtained from the OS for mspan
structures.
Mallocs is the cumulative count of heap objects allocated.
The number of live objects is Mallocs - Frees.
NextGC is the target heap size of the next GC cycle.
The garbage collector's goal is to keep HeapAlloc ≤ NextGC.
At the end of each GC cycle, the target for the next cycle
is computed based on the amount of reachable data and the
value of GOGC.
NumForcedGC is the number of GC cycles that were forced by
the application calling the GC function.
NumGC is the number of completed GC cycles.
OtherSys is bytes of memory in miscellaneous off-heap
runtime allocations.
PauseEnd is a circular buffer of recent GC pause end times,
as nanoseconds since 1970 (the UNIX epoch).
This buffer is filled the same way as PauseNs. There may be
multiple pauses per GC cycle; this records the end of the
last pause in a cycle.
PauseNs is a circular buffer of recent GC stop-the-world
pause times in nanoseconds.
The most recent pause is at PauseNs[(NumGC+255)%256]. In
general, PauseNs[N%256] records the time paused in the most
recent N%256th GC cycle. There may be multiple pauses per
GC cycle; this is the sum of all pauses during a cycle.
PauseTotalNs is the cumulative nanoseconds in GC
stop-the-world pauses since the program started.
During a stop-the-world pause, all goroutines are paused
and only the garbage collector can run.
StackInuse is bytes in stack spans.
In-use stack spans have at least one stack in them. These
spans can only be used for other stacks of the same size.
There is no StackIdle because unused stack spans are
returned to the heap (and hence counted toward HeapIdle).
StackSys is bytes of stack memory obtained from the OS.
StackSys is StackInuse, plus any memory obtained directly
from the OS for OS thread stacks.
In non-cgo programs this metric is currently equal to StackInuse
(but this should not be relied upon, and the value may change in
the future).
In cgo programs this metric includes OS thread stacks allocated
directly from the OS. Currently, this only accounts for one stack in
c-shared and c-archive build modes and other sources of stacks from
the OS (notably, any allocated by C code) are not currently measured.
Note this too may change in the future.
Sys is the total bytes of memory obtained from the OS.
Sys is the sum of the XSys fields below. Sys measures the
virtual address space reserved by the Go runtime for the
heap, stacks, and other internal data structures. It's
likely that not all of the virtual address space is backed
by physical memory at any given moment, though in general
it all was at some point.
TotalAlloc is cumulative bytes allocated for heap objects.
TotalAlloc increases as heap objects are allocated, but
unlike Alloc and HeapAlloc, it does not decrease when
objects are freed.
func ReadMemStats(m *MemStats)
func dumpmemstats(m *MemStats)
func mdump(m *MemStats)
func readmemstats_m(stats *MemStats)
func writeheapdump_m(fd uintptr, m *MemStats)
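A short sketch of taking a snapshot with ReadMemStats and printing a few of the fields described above:
package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	fmt.Printf("HeapAlloc = %d KiB\n", m.HeapAlloc/1024)
	fmt.Printf("Sys       = %d KiB\n", m.Sys/1024)
	fmt.Printf("NumGC     = %d\n", m.NumGC)
	if m.NumGC > 0 {
		// PauseNs is a circular buffer; the most recent pause is at
		// PauseNs[(NumGC+255)%256].
		fmt.Printf("last GC pause = %d ns\n", m.PauseNs[(m.NumGC+255)%256])
	}
}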
A PanicNilError happens when code calls panic(nil).
Before Go 1.21, programs that called panic(nil) observed recover returning nil.
Starting in Go 1.21, programs that call panic(nil) observe recover returning a *PanicNilError.
Programs can change back to the old behavior by setting GODEBUG=panicnil=1.
(*PanicNilError) Error() string
(*PanicNilError) RuntimeError()
*PanicNilError : Error
*PanicNilError : error
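A minimal sketch of the Go 1.21 behavior, assuming GODEBUG=panicnil=1 is not set:
package main

import (
	"fmt"
	"runtime"
)

func main() {
	defer func() {
		r := recover()
		if _, ok := r.(*runtime.PanicNilError); ok {
			fmt.Println("recovered from panic(nil):", r)
			return
		}
		fmt.Println("recovered:", r)
	}()
	panic(nil)
}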
A Pinner is a set of pinned Go objects. An object can be pinned with
the Pin method and all pinned objects of a Pinner can be unpinned with the
Unpin method.
pinner *pinner
pinner.refStore [5]unsafe.Pointer
pinner.refs []unsafe.Pointer
Pin pins a Go object, preventing it from being moved or freed by the garbage
collector until the Unpin method has been called.
A pointer to a pinned
object can be directly stored in C memory or can be contained in Go memory
passed to C functions. If the pinned object itself contains pointers to Go
objects, these objects must be pinned separately if they are going to be
accessed from C code.
The argument must be a pointer of any type or an
unsafe.Pointer. It must be the result of calling new,
taking the address of a composite literal, or taking the address of a
local variable. If one of these conditions is not met, Pin will panic.
Unpin unpins all pinned objects of the Pinner.
( Pinner) unpin()
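Pinner is typically used together with cgo. The sketch below only shows the Pin/Unpin lifecycle; the actual call into C code is omitted and left as a comment:
package main

import "runtime"

func main() {
	// A buffer whose address is to be handed to non-Go code. new
	// satisfies Pin's requirement on how the pointer was obtained.
	buf := new([64]byte)

	var p runtime.Pinner
	// The garbage collector will not move or free buf until Unpin.
	p.Pin(buf)

	// ... pass a pointer to buf to C code here (omitted) ...

	// Release every object pinned through p.
	p.Unpin()
}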
A StackRecord describes a single execution stack.
// stack trace for this record; ends at first 0 entry
Stack returns the stack trace associated with the record,
a prefix of r.Stack0.
func GoroutineProfile(p []StackRecord) (n int, ok bool)
func ThreadCreateProfile(p []StackRecord) (n int, ok bool)
func goroutineProfileWithLabels(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
func goroutineProfileWithLabelsConcurrent(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
func goroutineProfileWithLabelsSync(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
func runtime_goroutineProfileWithLabels(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
func saveg(pc, sp uintptr, gp *g, r *StackRecord)
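A minimal sketch of snapshotting all goroutine stacks with GoroutineProfile and resolving each record's PCs through CallersFrames (most programs use the runtime/pprof package instead); the slack of 10 records is an arbitrary example choice:
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// GoroutineProfile reports the required length if the slice is too
	// short; goroutines may come and go, so grow and retry until it fits.
	n, ok := runtime.GoroutineProfile(nil)
	var p []runtime.StackRecord
	for !ok {
		p = make([]runtime.StackRecord, n+10)
		n, ok = runtime.GoroutineProfile(p)
	}
	p = p[:n]

	for i := range p {
		pcs := p[i].Stack() // the record's stack trace as a prefix of Stack0
		frames := runtime.CallersFrames(pcs)
		top, _ := frames.Next()
		fmt.Printf("goroutine %d: %d frames, top %s\n", i, len(pcs), top.Function)
	}
}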
A _defer holds an entry on the list of deferred calls.
If you add a field here, add code to clear it in deferProcStack.
This struct must match the code in cmd/compile/internal/ssagen/ssa.go:deferstruct
and cmd/compile/internal/ssagen/ssa.go:(*state).call.
Some defers will be allocated on the stack and some on the heap.
All defers are logically part of the stack, so write barriers to
initialize them are not required. All defers must be manually scanned,
and for heap defers, marked.
// panic that is running defer
If openDefer is true, the fields below record values about the stack
frame and associated function that has the open-coded defer(s). sp
above will be the sp for the frame, and pc will be address of the
deferreturn call in the function.
// funcdata for the function associated with the frame
// can be nil for open-coded defers
framepc is the current pc associated with the stack frame. Together,
with sp above (which is the sp associated with the stack frame),
framepc/sp can be used as pc/sp pair to continue a stack trace via
gentraceback().
heap bool
// next defer on G; can point to either heap or stack!
openDefer indicates that this _defer is for a frame with open-coded
defers. We have only one defer record for the entire frame (which may
currently have 0, 1, or more defers active).
// pc at time of defer
// sp at time of defer
started bool
// value of varp for the stack frame
func newdefer() *_defer
func deferprocStack(d *_defer)
func freedefer(d *_defer)
func runOpenDeferFrame(d *_defer) bool
Layout of in-memory per-function information prepared by linker
See https://golang.org/s/go12symtab.
Keep in sync with linker (../cmd/link/internal/ld/pcln.go:/pclntab)
and with package debug/gosym and with symtab.go in package runtime.
// Only in static data
// in/out args size
// runtime.cutab offset of this function's CU
// offset of start of a deferreturn call instruction from entry, if any.
// start pc, as offset from moduledata.text/pcHeader.textStart
flag abi.FuncFlag
// set for certain special runtime functions
// function name, as index into moduledata.funcnametab.
// must be last, must end on a uint32-aligned boundary
npcdata uint32
pcfile uint32
pcln uint32
pcsp uint32
// line number of start of function (func keyword/TEXT directive)
(*_func) funcInfo() funcInfo
isInlined reports whether f should be re-interpreted as a *funcinl.
func (*Func).raw() *_func
A _panic holds information about an active panic.
A _panic value must only ever live on the stack.
The argp and link fields are stack pointers, but don't need special
handling during stack growth: because they are pointer-typed and
_panic values only live on the stack, regular stack pointer
adjustment takes care of them.
// the panic was aborted
// argument to panic
// pointer to arguments of deferred call run during panic; cannot move - known to liblink
goexit bool
// link to earlier panic
// where to return to in runtime if this panic is bypassed
// whether this panic is over
// where to return to in runtime if this panic is bypassed
func deferCallSave(p *_panic, fn func())
func fatalpanic(msgs *_panic)
func preprintpanics(p *_panic)
func printpanics(p *_panic)
activeSweep is a type that captures whether sweeping
is done, and whether there are any outstanding sweepers.
Every potential sweeper must call begin() before they look
for work, and end() after they've finished sweeping.
state is divided into two parts.
The top bit (masked by sweepDrainedMask) is a boolean
value indicating whether all the sweep work has been
drained from the queue.
The rest of the bits are a counter, indicating the
number of outstanding concurrent sweepers.
begin registers a new sweeper. Returns a sweepLocker
for acquiring spans for sweeping. Any outstanding sweeper blocks
sweep termination.
If the sweepLocker is invalid, the caller can be sure that all
outstanding sweep work has been drained, so there is nothing left
to sweep. Note that there may be sweepers currently running, so
this does not indicate that all sweeping has completed.
Even if the sweepLocker is invalid, its sweepGen is always valid.
end deregisters a sweeper. Must be called once for each time
begin is called if the sweepLocker is valid.
isDone returns true if all sweep work has been drained and no more
outstanding sweepers exist. That is, when the sweep phase is
completely done.
markDrained marks the active sweep cycle as having drained
all remaining work. This is safe to be called concurrently
with all other methods of activeSweep, though may race.
Returns true if this call was the one that actually performed
the mark.
reset sets up the activeSweep for the next sweep cycle.
The world must be stopped.
sweepers returns the current number of active sweepers.
addrRange represents a region of address space.
An addrRange must never span a gap in the address space.
base and limit together represent the region of address space
[base, limit). That is, base is inclusive, limit is exclusive.
These are addresses over an offset view of the address space on
platforms with a segmented address space, that is, on platforms
where arenaBaseOffset != 0.
base and limit together represent the region of address space
[base, limit). That is, base is inclusive, limit is exclusive.
These are addresses over an offset view of the address space on
platforms with a segmented address space, that is, on platforms
where arenaBaseOffset != 0.
contains returns whether or not the range contains a given address.
removeGreaterEqual removes all addresses in a greater than or equal
to addr and returns the new range.
size returns the size of the range represented in bytes.
subtract takes the addrRange toPrune and cuts out any overlap with
from, then returns the new range. subtract assumes that a and b
either don't overlap at all, only overlap on one side, or are equal.
If b is strictly contained in a, thus forcing a split, it will throw.
takeFromBack takes len bytes from the end of the address range, aligning
the limit to align after subtracting len. On success, returns the aligned
start of the region taken and true.
takeFromFront takes len bytes from the front of the address range, aligning
the base to align first. On success, returns the aligned start of the region
taken and true.
func makeAddrRange(base, limit uintptr) addrRange
addrRanges is a data structure holding a collection of ranges of
address space.
The ranges are coalesced eagerly to reduce the
number of ranges it holds.
The slice backing store for this field is persistentalloc'd
and thus there is no way to free it.
addrRanges is not thread-safe.
ranges is a slice of ranges sorted by base.
sysStat is the stat to track allocations by this type
totalBytes is the total amount of address space in bytes counted by
this addrRanges.
add inserts a new address range to a.
r must not overlap with any address range in a and r.size() must be > 0.
cloneInto makes a deep clone of a's state into b, re-using
b's ranges if able.
contains returns true if a covers the address addr.
findAddrGreaterEqual returns the smallest address represented by a
that is >= addr. Thus, if the address is represented by a,
then it returns addr. The second return value indicates whether
such an address exists for addr in a. That is, if addr is larger than
any address known to a, the second return value will be false.
findSucc returns the first index in a such that addr is
less than the base of the addrRange at that index.
(*addrRanges) init(sysStat *sysMemStat)
removeGreaterEqual removes the ranges of a which are above addr, and additionally
splits any range containing addr.
removeLast removes and returns the highest-addressed contiguous range
of a, or the last nBytes of that range, whichever is smaller. If a is
empty, it returns an empty range.
cache pcvalueCache
// ptr distance from old to new stack (newbase - oldbase)
old stack
sghi is the highest sudog.elem on the stack.
func adjustctxt(gp *g, adjinfo *adjustinfo)
func adjustdefers(gp *g, adjinfo *adjustinfo)
func adjustframe(frame *stkframe, adjinfo *adjustinfo)
func adjustpanics(gp *g, adjinfo *adjustinfo)
func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer)
func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
func adjustsudogs(gp *g, adjinfo *adjustinfo)
func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr
ancestorInfo records details of where a goroutine was started.
// goroutine id of this goroutine; original goroutine possibly dead
// pc of go statement that created this goroutine
// pcs from the stack of this goroutine
func saveAncestors(callergp *g) *[]ancestorInfo
func printAncestorTraceback(ancestor ancestorInfo)
arenaHint is a hint for where to grow the heap arenas. See
mheap_.arenaHints.
addr uintptr
down bool
next *arenaHint
l1 returns the "l1" portion of an arenaIdx.
Marked nosplit because it's called by spanOf and other nosplit
functions.
l2 returns the "l2" portion of an arenaIdx.
Marked nosplit because it's called by spanOf and other nosplit functions.
func arenaIndex(p uintptr) arenaIdx
func arenaBase(i arenaIdx) uintptr
atomicHeadTailIndex is an atomically-accessed headTailIndex.
u atomic.Uint64
cas atomically compares-and-swaps a headTailIndex value.
decHead atomically decrements the head of a headTailIndex.
incHead atomically increments the head of a headTailIndex.
incTail atomically increments the tail of a headTailIndex.
load atomically reads a headTailIndex value.
reset clears the headTailIndex to (0, 0).
atomicMSpanPointer is an atomic.Pointer[mspan]. Can't use generics because it's NotInHeap.
p atomic.UnsafePointer
Load returns the *mspan.
Store stores an *mspan.
atomicOffAddr is like offAddr, but operations on it are atomic.
It also contains operations to be able to store marked addresses
to ensure that they're not overridden until they've been seen.
a contains the offset address, unlike offAddr.
Clear attempts to store minOffAddr in atomicOffAddr. It may fail
if a marked value is placed in the box in the meanwhile.
Load returns the address in the box as a virtual address. It also
returns if the value was marked or not.
StoreMarked stores addr but first converted to the offset address
space and then negated.
StoreMin stores addr if it's less than the current value in the
offset address space if the current value is not marked.
StoreUnmark attempts to unmark the value in atomicOffAddr and
replace it with newAddr. markedAddr must be a marked address
returned by Load. This function will not store newAddr if the
box no longer contains markedAddr.
atomicScavChunkData is an atomic wrapper around a scavChunkData
that stores it in its packed form.
value atomic.Uint64
load loads and unpacks a scavChunkData.
store packs and writes a new scavChunkData. store must be serialized
with other calls to store.
atomicSpanSetSpinePointer is an atomically-accessed spanSetSpinePointer.
It has the same semantics as atomic.UnsafePointer.
a atomic.UnsafePointer
Loads the spanSetSpinePointer and returns it.
It has the same semantics as atomic.UnsafePointer.
Stores the spanSetSpinePointer.
It has the same semantics as atomic.UnsafePointer.
Information from the compiler about the layout of stack frames.
Note: this type must agree with reflect.bitVector.
bytedata *uint8
// # of bits
ptrbit returns the i'th bit in bv.
ptrbit is less efficient than iterating directly over bitvector bits,
and should only be used in non-performance-critical code.
See adjustpointers for an example of a high-efficiency walk of a bitvector.
func makeheapobjbv(p uintptr, size uintptr) bitvector
func progToPointerMask(prog *byte, size uintptr) bitvector
func stackmapdata(stkmap *stackmap, n int32) bitvector
func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
func dumpbv(cbv *bitvector, offset uintptr)
func dumpfields(bv bitvector)
func dumpobj(obj unsafe.Pointer, size uintptr, bv bitvector)
A blockRecord is the bucket data for a bucket of type blockProfile,
which is used in blocking and mutex profiles.
count float64
cycles int64
A bucket for a Go map.
tophash generally contains the top byte of the hash value
for each key in this bucket. If tophash[0] < minTopHash,
tophash[0] is a bucket evacuation state instead.
(*bmap) keys() unsafe.Pointer
(*bmap) overflow(t *maptype) *bmap
(*bmap) setoverflow(t *maptype, ovf *bmap)
func makeBucketArray(t *maptype, b uint8, dirtyalloc unsafe.Pointer) (buckets unsafe.Pointer, nextOverflow *bmap)
func moveToBmap(t *maptype, h *hmap, dst *bmap, pos int, src *bmap) (*bmap, int)
func copyKeys(t *maptype, h *hmap, b *bmap, s *slice, offset uint8)
func copyValues(t *maptype, h *hmap, b *bmap, s *slice, offset uint8)
func evacuated(b *bmap) bool
func moveToBmap(t *maptype, h *hmap, dst *bmap, pos int, src *bmap) (*bmap, int)
func moveToBmap(t *maptype, h *hmap, dst *bmap, pos int, src *bmap) (*bmap, int)
A boundsError represents an indexing or slicing operation gone wrong.
code boundsErrorCode
Values in an index or slice expression can be signed or unsigned.
That means we'd need 65 bits to encode all possible indexes, from -2^63 to 2^64-1.
Instead, we keep track of whether x should be interpreted as signed or unsigned.
y is known to be nonnegative and to fit in an int.
x int64
y int
( boundsError) Error() string
( boundsError) RuntimeError()
boundsError : Error
boundsError : error
const boundsConvert
const boundsIndex
const boundsSlice3Acap
const boundsSlice3Alen
const boundsSlice3B
const boundsSlice3C
const boundsSliceAcap
const boundsSliceAlen
const boundsSliceB
A bucket holds per-call-stack profiling information.
The representation is a bit sleazy, inherited from C.
This struct defines the bucket header. It is followed in
memory by the stack words and then the actual record
data, either a memRecord or a blockRecord.
Per-call-stack profiling information.
Lookup by hashing call stack into a linked-list hash table.
None of the fields in this bucket header are modified after
creation, including its next and allnext links.
No heap pointers.
allnext *bucket
hash uintptr
next *bucket
nstk uintptr
size uintptr
// memBucket or blockBucket (includes mutexProfile)
bp returns the blockRecord associated with the blockProfile bucket b.
mp returns the memRecord associated with the memProfile bucket b.
stk returns the slice in b holding the stack.
func newBucket(typ bucketType, nstk int) *bucket
func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket
func dumpmemprof_callback(b *bucket, nstk uintptr, pstk *uintptr, size, allocs, frees uintptr)
func mProf_Free(b *bucket, size uintptr)
func record(r *MemProfileRecord, b *bucket)
func setprofilebucket(p unsafe.Pointer, b *bucket)
func newBucket(typ bucketType, nstk int) *bucket
func saveblockevent(cycles, rate int64, skip int, which bucketType)
func stkbucket(typ bucketType, size uintptr, stk []uintptr, alloc bool) *bucket
const blockProfile
const memProfile
const mutexProfile
type buckhashArray ([...])
Addresses collected in a cgo backtrace when crashing.
Length must match arg.Max in x_cgo_callers in runtime/cgo/gcc_traceback.c.
func printCgoTraceback(callers *cgoCallers)
var sigprofCallers
cgoSymbolizerArg is the type passed to cgoSymbolizer.
data uintptr
entry uintptr
file *byte
funcName *byte
lineno uintptr
more uintptr
pc uintptr
func callCgoSymbolizer(arg *cgoSymbolizerArg)
func printOneCgoTraceback(pc uintptr, commitFrame func() (pr, stop bool), arg *cgoSymbolizerArg) bool
cgoTracebackArg is the type passed to cgoTraceback.
buf *uintptr
context uintptr
max uintptr
sigContext uintptr
A checkmarksMap stores the GC marks in "checkmarks" mode. It is a
per-arena bitmap with a bit for every word in the arena. The mark
is stored on the bit corresponding to the first word of the marked
allocation.
b [1048576]uint8
// size of args region
Information passed up from the callee frame about
the layout of the outargs region.
// where the arguments start in the frame
// if args.n >= 0, pointer map of args region
// depth in call stack (0 == most recent)
// callee sp
func dumpframe(s *stkframe, child *childInfo)
Global chunk index.
Represents an index into the leaf level of the radix tree.
Similar to arenaIndex, except instead of arenas, it divides the address
space into chunks.
l1 returns the index into the first level of (*pageAlloc).chunks.
l2 returns the index into the second level of (*pageAlloc).chunks.
func chunkIndex(p uintptr) chunkIdx
func chunkBase(ci chunkIdx) uintptr
consistentHeapStats represents a set of various memory statistics
whose updates must be viewed completely to get a consistent
state of the world.
To write updates to memory stats use the acquire and release
methods. To obtain a consistent global snapshot of these statistics,
use read.
gen represents the current index into which writers
are writing, and can take on the value of 0, 1, or 2.
noPLock is intended to provide mutual exclusion for updating
stats when no P is available. It does not block other writers
with a P, only other writers without a P and the reader. Because
stats are usually updated when a P is available, contention on
this lock should be minimal.
stats is a ring buffer of heapStatsDelta values.
Writers always atomically update the delta at index gen.
Readers operate by rotating gen (0 -> 1 -> 2 -> 0 -> ...)
and synchronizing with writers by observing each P's
statsSeq field. If the reader observes a P not writing,
it can be sure that it will pick up the new gen value the
next time it writes.
The reader then takes responsibility by clearing space
in the ring buffer for the next reader to rotate gen to
that space (i.e. it merges in values from index (gen-2) mod 3
to index (gen-1) mod 3, then clears the former).
Note that this means only one reader can be reading at a time.
There is no way for readers to synchronize.
This process is why we need a ring buffer of size 3 instead
of 2: one is for the writers, one contains the most recent
data, and the last one is clear so writers can begin writing
to it the moment gen is updated.
acquire returns a heapStatsDelta to be updated. In effect,
it acquires the shard for writing. release must be called
as soon as the relevant deltas are updated.
The returned heapStatsDelta must be updated atomically.
The caller's P must not change between acquire and
release. This also means that the caller should not
acquire a P or release its P in between. A P also must
not acquire a given consistentHeapStats if it hasn't
yet released it.
nosplit because a stack growth in this function could
lead to a stack allocation that could reenter the
function.
read takes a globally consistent snapshot of m
and puts the aggregated value in out. Even though out is a
heapStatsDelta, the resulting values should be complete and
valid statistic values.
Not safe to call concurrently. The world must be stopped
or metricsSema must be held.
release indicates that the writer is done modifying
the delta. The value returned by the corresponding
acquire must no longer be accessed or modified after
release is called.
The caller's P must not change between acquire and
release. This also means that the caller should not
acquire a P or release its P in between.
nosplit because a stack growth in this function could
lead to a stack allocation that causes another acquire
before this operation has completed.
unsafeClear clears the shard.
Unsafe because the world must be stopped and values should
be donated elsewhere before clearing.
unsafeRead aggregates the delta for this shard into out.
Unsafe because it does so without any synchronization. The
world must be stopped.
extra holds extra stacks accumulated in addNonGo
corresponding to profiling signals arriving on
non-Go-created threads. Those stacks are written
to log the next time a normal Go thread gets the
signal handler.
Assuming the stacks are 2 words each (we don't get
a full traceback from those threads), plus one word
size for framing, 100 Hz profiling would generate
300 words per second.
Hopefully a normal Go thread will get the profiling
signal at least once every few seconds.
lock mutex
// profile events written here
// count of frames lost because of being in atomic64 on mips/arm; updated racily
// count of frames lost because extra is full
numExtra int
// profiling is on
add adds the stack trace to the profile.
It is called from signal handlers and other limited environments
and cannot allocate memory or acquire locks that might be
held at the time of the signal, nor can it use substantial amounts
of stack.
addExtra adds the "extra" profiling events,
queued by addNonGo, to the profile log.
addExtra is called either from a signal handler on a Go thread
or from an ordinary goroutine; either way it can use stack
and has a g. The world may be stopped, though.
addNonGo adds the non-Go stack trace to the profile.
It is called from a non-Go thread, so we cannot use much stack at all,
nor do anything that needs a g or an m.
In particular, we can't call cpuprof.log.write.
Instead, we copy the stack into cpuprof.extra,
which will be drained the next time a Go thread
gets the signal handling event.
var cpuprof
// GC assists
// GC dedicated mark workers + pauses
// GC idle mark workers
// GC pauses (all GOMAXPROCS, even if just 1 is running)
gcTotalTime int64
// Time Ps spent in _Pidle.
// background scavenger
// scavenge assists
scavengeTotalTime int64
// GOMAXPROCS * (monotonic wall clock time elapsed)
// Time Ps spent in _Prunning or _Psyscall that's not any of the above.
accumulate takes a cpuStats and adds in the current state of all GC CPU
counters.
gcMarkPhase indicates that we're in the mark phase and that certain counter
values should be used.
cpuStatsAggregate represents CPU stats obtained from the runtime
acquired together to avoid skew and inconsistencies.
cpuStats cpuStats
// GC assists
// GC dedicated mark workers + pauses
// GC idle mark workers
// GC pauses (all GOMAXPROCS, even if just 1 is running)
cpuStats.gcTotalTime int64
// Time Ps spent in _Pidle.
// background scavenger
// scavenge assists
cpuStats.scavengeTotalTime int64
// GOMAXPROCS * (monotonic wall clock time elapsed)
// Time Ps spent in _Prunning or _Psyscall that's not any of the above.
accumulate takes a cpuStats and adds in the current state of all GC CPU
counters.
gcMarkPhase indicates that we're in the mark phase and that certain counter
values should be used.
compute populates the cpuStatsAggregate with values from the runtime.
// for variables that can be changed during execution
// default value (ideally zero)
name string
// for variables that can only be set at startup
begin and end are the positions in the log of the beginning
and end of the log data, modulo len(data).
data *debugLogBuf
begin and end are the positions in the log of the beginning
and end of the log data, modulo len(data).
tick and nano are the current time base at begin.
tick and nano are the current time base at begin.
(*debugLogReader) header() (end, tick, nano uint64, p int)
(*debugLogReader) peek() (tick uint64)
(*debugLogReader) printVal() bool
(*debugLogReader) readUint16LEAt(pos uint64) uint16
(*debugLogReader) readUint64LEAt(pos uint64) uint64
(*debugLogReader) skip() uint64
(*debugLogReader) uvarint() uint64
(*debugLogReader) varint() int64
A debugLogWriter is a ring buffer of binary debug log records.
A log record consists of a 2-byte framing header and a sequence of
fields. The framing header gives the size of the record as a little
endian 16-bit value. Each field starts with a byte indicating its
type, followed by type-specific data. If the size in the framing
header is 0, it's a sync record consisting of two little endian
64-bit values giving a new time base.
Because this is a ring buffer, new records will eventually
overwrite old records. Hence, it maintains a reader that consumes
the log as it gets overwritten. That reader state is where an
actual log reader would start.
buf is a scratch buffer for encoding. This is here to
reduce stack usage.
data debugLogBuf
tick and nano are the time bases from the most recently
written sync record.
r is a reader that consumes records as they get overwritten
by the writer. It also acts as the initial reader state
when printing the log.
tick and nano are the time bases from the most recently
written sync record.
write uint64
(*debugLogWriter) byte(x byte)
(*debugLogWriter) bytes(x []byte)
(*debugLogWriter) ensure(n uint64)
(*debugLogWriter) uvarint(u uint64)
(*debugLogWriter) varint(x int64)
(*debugLogWriter) writeFrameAt(pos, size uint64) bool
(*debugLogWriter) writeSync(tick, nano uint64)
(*debugLogWriter) writeUint64LE(x uint64)
A dlogger writes to the debug log.
To obtain a dlogger, call dlog(). When done with the dlogger, call
end().
allLink is the next dlogger in the allDloggers list.
owned indicates that this dlogger is owned by an M. This is
accessed atomically.
w debugLogWriter
(*dlogger) b(x bool) *dlogger
(*dlogger) end()
(*dlogger) hex(x uint64) *dlogger
(*dlogger) i(x int) *dlogger
(*dlogger) i16(x int16) *dlogger
(*dlogger) i32(x int32) *dlogger
(*dlogger) i64(x int64) *dlogger
(*dlogger) i8(x int8) *dlogger
(*dlogger) p(x any) *dlogger
(*dlogger) pc(x uintptr) *dlogger
(*dlogger) s(x string) *dlogger
(*dlogger) traceback(x []uintptr) *dlogger
(*dlogger) u(x uint) *dlogger
(*dlogger) u16(x uint16) *dlogger
(*dlogger) u32(x uint32) *dlogger
(*dlogger) u64(x uint64) *dlogger
(*dlogger) u8(x uint8) *dlogger
(*dlogger) uptr(x uintptr) *dlogger
func dlog() *dlogger
func getCachedDlogger() *dlogger
func putCachedDlogger(l *dlogger) bool
var allDloggers *dlogger
_type *_type
data unsafe.Pointer
func efaceOf(ep *any) *eface
func assertE2I2(inter *interfacetype, e eface) (r iface)
func printeface(e eface)
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
// Dynamic entry type
// Integer value
// ELF header size in bytes
// Entry point virtual address
// Processor-specific flags
// Magic number and other info
// Architecture
// Program header table entry size
// Program header table entry count
// Program header table file offset
// Section header table entry size
// Section header table entry count
// Section header table file offset
// Section header string table index
// Object file type
// Object file version
func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)
// Segment alignment
// Segment size in file
// Segment flags
// Segment size in memory
// Segment file offset
// Segment physical address
// Segment type
// Segment virtual address
// Section virtual addr at execution
// Section alignment
// Entry size if section holds table
// Section flags
// Additional section information
// Link to another section
// Section name (string tbl index)
// Section file offset
// Section size in bytes
// Section type
// Version or dependency names
// Offset in bytes to next verdaux entry
// Offset in bytes to verdaux array
// Number of associated aux entries
// Version information
// Version name hash value
// Version Index
// Offset in bytes to next verdef entry
// Version revision
// memory address where the error occurred
// error message
Addr returns the memory address where a fault occurred.
The address provided is best-effort.
The veracity of the result may depend on the platform.
Errors providing this method will only be returned as
a result of using runtime/debug.SetPanicOnFault.
( errorAddressString) Error() string
( errorAddressString) RuntimeError()
errorAddressString : Error
errorAddressString : error
An errorString represents a runtime error described by a single string.
( errorString) Error() string
( errorString) RuntimeError()
errorString : Error
errorString : error
evacDst is an evacuation destination.
// current destination bucket
// pointer to current elem storage
// key/elem index into b
// pointer to current key storage
exitHook stores a function to be run on program exit, registered
by the utility runtime.addExitHook.
// func to run
// whether to run on non-zero exit code
NOTE: Layout known to queuefinalizer.
// ptr to object (may be a heap pointer)
// type of first argument of fn
// function to call (may be a heap pointer)
// bytes of return values from fn
// type of ptr to object (may be a heap pointer)
finblock is an array of finalizers to be executed. finblocks are
arranged in a linked list for the finalizer queue.
finblock is allocated from non-GC'd memory, so any heap pointers
must be specially handled. GC currently assumes that the finalizer
queue does not grow during marking (but it can shrink).
alllink *finblock
cnt uint32
fin [101]finalizer
next *finblock
var allfin *finblock
var finc *finblock
var finq *finblock
findfuncbucket is an array of these structures.
Each bucket represents 4096 bytes of the text segment.
Each subbucket represents 256 bytes of the text segment.
To find a function given a pc, locate the bucket and subbucket for
that pc. Add together the idx and subbucket value to obtain a
function index. Then scan the functab array starting at that
index to find the target function.
This table uses 20 bytes for every 4096 bytes of code, or ~0.5% overhead.
idx uint32
subbuckets [16]byte
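A sketch of the lookup described above, under the assumption that pcOff is the pc's offset from the start of the text segment; the bucket type and funcIndex are illustrative names, not the runtime's own:
	// bucket mirrors findfuncbucket: a base index plus per-subbucket deltas.
	type bucket struct {
		idx        uint32
		subbuckets [16]byte
	}

	// funcIndex returns the functab index at which to start scanning for pcOff.
	func funcIndex(table []bucket, pcOff uintptr) uint32 {
		b := pcOff / 4096           // each bucket covers 4096 bytes of text
		sub := (pcOff % 4096) / 256 // each subbucket covers 256 bytes
		fb := &table[b]
		return fb.idx + uint32(fb.subbuckets[sub])
	}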
fixalloc is a simple free-list allocator for fixed size objects.
Malloc uses a FixAlloc wrapped around sysAlloc to manage its
mcache and mspan objects.
Memory returned by fixalloc.alloc is zeroed by default, but the
caller may take responsibility for zeroing allocations by setting
the zero flag to false. This is only safe if the memory never
contains heap pointers.
The caller is responsible for locking around FixAlloc calls.
Callers can keep state in the object but the first word is
smashed by freeing and reallocating.
Consider marking fixalloc'd types not in heap by embedding
runtime/internal/sys.NotInHeap.
arg unsafe.Pointer
// use uintptr instead of unsafe.Pointer to avoid write barriers
// called first time p is returned
// in-use bytes now
list *mlink
// size of new chunks in bytes
// bytes remaining in current chunk
size uintptr
stat *sysMemStat
// zero allocations
(*fixalloc) alloc() unsafe.Pointer
(*fixalloc) free(p unsafe.Pointer)
Initialize f to allocate objects of the given size,
using the allocator to obtain chunks of memory.
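The free-list pattern fixalloc implements can be sketched with ordinary Go allocation standing in for sysAlloc; this only illustrates the idea (freed blocks are reused before new ones are carved out of a chunk) and is not the runtime's code:
	type freeList struct {
		size  int
		chunk []byte   // current chunk to carve new blocks from
		free  [][]byte // blocks returned by callers, reused first
	}

	func (f *freeList) alloc() []byte {
		if n := len(f.free); n > 0 {
			b := f.free[n-1]
			f.free = f.free[:n-1]
			return b
		}
		if len(f.chunk) < f.size {
			f.chunk = make([]byte, 16*1024) // grab a fresh chunk, like fixalloc's nchunk
		}
		b := f.chunk[:f.size:f.size]
		f.chunk = f.chunk[f.size:]
		return b
	}

	func (f *freeList) freeBlock(b []byte) { f.free = append(f.free, b) }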
_st [8]fpxreg
_xmm [16]xmmreg
cwd uint16
fop uint16
ftw uint16
mxcr_mask uint32
mxcsr uint32
padding [24]uint32
rdp uint64
rip uint64
swd uint16
_st [8]fpxreg1
_xmm [16]xmmreg1
cwd uint16
fop uint16
ftw uint16
mxcr_mask uint32
mxcsr uint32
padding [24]uint32
rdp uint64
rip uint64
swd uint16
// Only in static data
_func *_func
// in/out args size
// runtime.cutab offset of this function's CU
// offset of start of a deferreturn call instruction from entry, if any.
// start pc, as offset from moduledata.text/pcHeader.textStart
_func.flag abi.FuncFlag
// set for certain special runtime functions
// function name, as index into moduledata.funcnametab.
// must be last, must end on a uint32-aligned boundary
_func.npcdata uint32
_func.pcfile uint32
_func.pcln uint32
_func.pcsp uint32
// line number of start of function (func keyword/TEXT directive)
datap *moduledata
( funcInfo) _Func() *Func
entry returns the entry PC for f.
( funcInfo) funcInfo() funcInfo
isInlined reports whether f should be re-interpreted as a *funcinl.
( funcInfo) srcFunc() srcFunc
( funcInfo) valid() bool
func findfunc(pc uintptr) funcInfo
func (*Func).funcInfo() funcInfo
func adjustpointers(scanp unsafe.Pointer, bv *bitvector, adjinfo *adjustinfo, f funcInfo)
func funcdata(f funcInfo, i uint8) unsafe.Pointer
func funcfile(f funcInfo, fileno int32) string
func funcline(f funcInfo, targetpc uintptr) (file string, line int32)
func funcline1(f funcInfo, targetpc uintptr, strict bool) (file string, line int32)
func funcMaxSPDelta(f funcInfo) int32
func funcname(f funcInfo) string
func funcpkgpath(f funcInfo) string
func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32
func newInlineUnwinder(f funcInfo, pc uintptr, cache *pcvalueCache) (inlineUnwinder, inlineFrame)
func pcdatastart(f funcInfo, table uint32) uint32
func pcdatavalue(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache) int32
func pcdatavalue1(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
func pcdatavalue2(f funcInfo, table uint32, targetpc uintptr) (int32, uintptr)
func pcvalue(f funcInfo, off uint32, targetpc uintptr, cache *pcvalueCache, strict bool) (int32, uintptr)
func printAncestorTracebackFuncInfo(f funcInfo, pc uintptr)
func printArgs(f funcInfo, argp unsafe.Pointer, pc uintptr)
func printcreatedby1(f funcInfo, pc uintptr, goid uint64)
Pseudo-Func that is returned for PCs that occur in inlined code.
A *Func can be either a *_func or a *funcinl, and they are distinguished
by the first uintptr.
TODO(austin): Can we merge this with inlinedCall?
// entry of the real (the "outermost") frame
file string
line int32
name string
// set to ^0 to distinguish from _func
startLine int32
fn uintptr
func addfinalizer(p unsafe.Pointer, f *funcval, nret uintptr, fint *_type, ot *ptrtype) bool
func dumpfinalizer(obj unsafe.Pointer, fn *funcval, fint *_type, ot *ptrtype)
func finq_callback(fn *funcval, obj unsafe.Pointer, nret uintptr, fint *_type, ot *ptrtype)
func gostartcallfn(gobuf *gobuf, fv *funcval)
func newproc(fn *funcval)
func newproc1(fn *funcval, callergp *g, callerpc uintptr) *g
func queuefinalizer(p unsafe.Pointer, fn *funcval, nret uintptr, fint *_type, ot *ptrtype)
// innermost defer
// innermost panic - offset known to liblink
activeStackChans indicates that there are unlocked channels
pointing into this goroutine's stack. If true, stack
copying needs to acquire channel locks to protect these
areas of the stack.
// ancestor information goroutine(s) that created this goroutine (only used if debug.tracebackancestors)
asyncSafePoint is set if g is stopped at an asynchronous
safe point. This means there are frames on the stack
without precise pointer information.
atomicstatus atomic.Uint32
// cgo traceback context
gcAssistBytes is this G's GC assist credit in terms of
bytes allocated. If this is positive, then the G has credit
to allocate gcAssistBytes bytes without assisting. If this
is negative, then the G must correct this by performing
scan work. We track this in bytes to make it fast to update
and check for debt in the malloc hot path. The assist ratio
determines how this corresponds to scan work debt.
// g has scanned stack; protected by _Gscan bit in status
goid uint64
// pc of go statement that created this goroutine
goroutineProfiled indicates the status of this goroutine's stack for the
current in-progress goroutine profile
// profiler labels
lockedm muintptr
// current m; offset known to arm liblink
// panic (instead of crash) on unexpected fault address
param is a generic pointer parameter field used to pass
values in particular contexts where other storage for the
parameter would be difficult to find. It is currently used
in three ways:
1. When a channel operation wakes up a blocked goroutine, it sets param to
point to the sudog of the completed blocking operation.
2. By gcAssistAlloc1 to signal back to its caller that the goroutine completed
the GC cycle. It is unsafe to do so in any other way, because the goroutine's
stack may have moved in the meantime.
3. By debugCallWrap to pass parameters to a new goroutine because allocating a
closure in the runtime is forbidden.
// goid of goroutine that created this goroutine
parkingOnChan indicates that the goroutine is about to
park on a chansend or chanrecv. Used to signal an unsafe point
for stack shrinking.
// preemption signal, duplicates stackguard0 = stackpreempt
// shrink stack at synchronous safe point
// transition to _Gpreempted on preemption; otherwise, just deschedule
racectx uintptr
// ignore race detection events
// the amount of time spent runnable, cleared when running, only used when tracking
sched gobuf
schedlink guintptr
// are we participating in a select and did someone win the race?
sig uint32
sigcode0 uintptr
sigcode1 uintptr
sigpc uintptr
Stack parameters.
stack describes the actual stack memory: [stack.lo, stack.hi).
stackguard0 is the stack pointer compared in the Go stack growth prologue.
It is stack.lo+StackGuard normally, but can be StackPreempt to trigger a preemption.
stackguard1 is the stack pointer compared in the C stack growth prologue.
It is stack.lo+StackGuard on g0 and gsignal stacks.
It is ~0 on other goroutine stacks, to trigger a call to morestackc (and crash).
// offset known to runtime/cgo
// sigprof/scang lock; TODO: fold in to atomicstatus
// offset known to liblink
// offset known to liblink
// pc of goroutine function
// expected sp at top of stack, to check in traceback
// if status==Gsyscall, syscallpc = sched.pc to use during gc
// if status==Gsyscall, syscallsp = sched.sp to use during gc
// must not split stack
// cached timer for time.Sleep
Per-G tracer state.
// whether we're tracking this G for sched latency statistics
// used to decide whether to track this G
// timestamp of when the G last started being tracked
// sudog structures this g is waiting on (that have a valid elem ptr); in lock order
// if status==Gwaiting
// approx time when the g became blocked
writebuf []byte
(*g) guintptr() guintptr
func allGsSnapshot() []*g
func atomicAllG() (**g, uintptr)
func atomicAllGIndex(ptr **g, i uintptr) *g
func beforeIdle(int64, int64) (*g, bool)
func checkIdleGCNoP() (*p, *g)
func deductAssistCredit(size uintptr) *g
func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
func findRunnable() (gp *g, inheritTime, tryWakeP bool)
func getg() *g
func gfget(pp *p) *g
func globrunqget(pp *p, max int32) *g
func malg(stacksize int32) *g
func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g
func newproc1(fn *funcval, callergp *g, callerpc uintptr) *g
func runqget(pp *p) (gp *g, inheritTime bool)
func runqsteal(pp, p2 *p, stealRunNextG bool) *g
func sigFetchG(c *sigctxt) *g
func stealWork(now int64) (gp *g, inheritTime bool, rnow, pollUntil int64, newWork bool)
func traceReader() *g
func traceReaderAvailable() *g
func wakefing() *g
func addOneOpenDeferFrame(gp *g, pc uintptr, sp unsafe.Pointer)
func adjustctxt(gp *g, adjinfo *adjustinfo)
func adjustdefers(gp *g, adjinfo *adjustinfo)
func adjustpanics(gp *g, adjinfo *adjustinfo)
func adjustsudogs(gp *g, adjinfo *adjustinfo)
func allgadd(gp *g)
func atomicAllGIndex(ptr **g, i uintptr) *g
func casfrom_Gscanstatus(gp *g, oldval, newval uint32)
func casgcopystack(gp *g) uint32
func casGFromPreempted(gp *g, old, new uint32) bool
func casgstatus(gp *g, oldval, newval uint32)
func casGToPreemptScan(gp *g, old, new uint32)
func casGToWaiting(gp *g, old uint32, reason waitReason)
func castogscanstatus(gp *g, oldval, newval uint32) bool
func chanparkcommit(gp *g, chanLock unsafe.Pointer) bool
func copystack(gp *g, newsize uintptr)
func dopanic_m(gp *g, pc, sp uintptr) bool
func doRecordGoroutineProfile(gp1 *g)
func doSigPreempt(gp *g, ctxt *sigctxt)
func dumpgoroutine(gp *g)
func dumpgstatus(gp *g)
func execute(gp *g, inheritTime bool)
func exitsyscall0(gp *g)
func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
func finalizercommit(gp *g, lock unsafe.Pointer) bool
func findsghi(gp *g, stk stack) uintptr
func gcallers(gp *g, skip int, pcbuf []uintptr) int
func gcAssistAlloc(gp *g)
func gcAssistAlloc1(gp *g, scanWork int64)
func gfput(pp *p, gp *g)
func globrunqput(gp *g)
func globrunqputhead(gp *g)
func goexit0(gp *g)
func gopreempt_m(gp *g)
func goready(gp *g, traceskip int)
func goroutineheader(gp *g)
func gosched_m(gp *g)
func goschedguarded_m(gp *g)
func goschedImpl(gp *g)
func goyield_m(gp *g)
func isAsyncSafePoint(gp *g, pc, sp, lr uintptr) (bool, uintptr)
func isShrinkStackSafe(gp *g) bool
func isSystemGoroutine(gp *g, fixed bool) bool
func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool
func netpollgoready(gp *g, traceskip int)
func newproc1(fn *funcval, callergp *g, callerpc uintptr) *g
func park_m(gp *g)
func parkunlock_c(gp *g, lock unsafe.Pointer) bool
func preemptPark(gp *g)
func printcreatedby(gp *g)
func raceacquireg(gp *g, addr unsafe.Pointer)
func racereleaseacquireg(gp *g, addr unsafe.Pointer)
func racereleaseg(gp *g, addr unsafe.Pointer)
func racereleasemergeg(gp *g, addr unsafe.Pointer)
func readgstatus(gp *g) uint32
func ready(gp *g, traceskip int, next bool)
func recovery(gp *g)
func resetForSleep(gp *g, ut unsafe.Pointer) bool
func runqput(pp *p, gp *g, next bool)
func runqputslow(pp *p, gp *g, h, t uint32) bool
func saveAncestors(callergp *g) *[]ancestorInfo
func saveg(pc, sp uintptr, gp *g, r *StackRecord)
func scanstack(gp *g, gcw *gcWork) int64
func schedEnabled(gp *g) bool
func selparkcommit(gp *g, _ unsafe.Pointer) bool
func setg(gg *g)
func setGNoWB(gp **g, new *g)
func shouldPushSigpanic(gp *g, pc, lr uintptr) bool
func showframe(sf srcFunc, gp *g, firstFrame bool, calleeID abi.FuncID) bool
func shrinkstack(gp *g)
func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)
func sigprof(pc, sp, lr uintptr, gp *g, mp *m)
func startlockedm(gp *g)
func suspendG(gp *g) suspendGState
func syncadjustsudogs(gp *g, used uintptr, adjinfo *adjustinfo) uintptr
func traceback(pc, sp, lr uintptr, gp *g)
func traceback1(pc, sp, lr uintptr, gp *g, flags unwindFlags)
func tracebackothers(me *g)
func tracebacktrap(pc, sp, lr uintptr, gp *g)
func traceCPUSample(gp *g, pp *p, stk []uintptr)
func traceGoCreate(newg *g, pc uintptr)
func traceGoUnpark(gp *g, skip int)
func traceOneNewExtraM(gp *g)
func tryRecordGoroutineProfile(gp1 *g, yield func())
func tryRecordGoroutineProfileWB(gp1 *g)
func wantAsyncPreempt(gp *g) bool
var fing *g
var g0
gcBgMarkWorkerNode is an entry in the gcBgMarkWorkerPool. It points to a single
gcBgMarkWorker goroutine.
The g of this worker.
Release this m on park. This is used to communicate with the unlock
function, which cannot access the G's stack. It is unused outside of
gcBgMarkWorker().
Unused workers are managed in a lock-free stack. This field must be first.
gcBits is an alloc/mark bitmap. This is always used as gcBits.x.
x uint8
bitp returns a pointer to the byte containing bit n and a mask for
selecting that bit from *bytep.
bytep returns a pointer to the n'th byte of b.
func newAllocBits(nelems uintptr) *gcBits
func newMarkBits(nelems uintptr) *gcBits
bits [65520]gcBits
gcBitsHeader // side step recursive type bug (issue 14620) by including fields by hand.
// free is the index into bits of the next free byte; read/write atomically
next *gcBitsArena
tryAlloc allocates from b or returns nil if b does not have enough room.
This is safe to call concurrently.
func newArenaMayUnlock() *gcBitsArena
// free is the index into bits of the next free byte.
// *gcBits triggers recursive type bug. (issue 14620)
assistBytesPerWork is 1/assistWorkPerByte.
Note that because this is read and written independently
from assistWorkPerByte users may notice a skew between
the two values, and such a state should be safe.
assistTime is the nanoseconds spent in mutator assists
during this cycle. This is updated atomically, and must also
be updated atomically even during a STW, because it is read
by sysmon. Updates occur in bounded batches, since it is both
written and read throughout the cycle.
assistWorkPerByte is the ratio of scan work to allocated
bytes that should be performed by mutator assists. This is
computed at the beginning of each cycle and updated every
time heapScan is updated.
bgScanCredit is the scan work credit accumulated by the concurrent
background scan. This credit is accumulated by the background scan
and stolen by mutator assists. Updates occur in bounded batches,
since it is both written and read throughout the cycle.
consMark is the estimated per-CPU consMark ratio for the application.
It represents the ratio between the application's allocation
rate, as bytes allocated per CPU-time, and the GC's scan rate,
as bytes scanned per CPU-time.
The units of this ratio are (B / cpu-ns) / (B / cpu-ns).
At a high level, this value is computed as the bytes of memory
allocated (cons) per unit of scan work completed (mark) in a GC
cycle, divided by the CPU time spent on each activity.
Updated at the end of each GC cycle, in endCycle.
dedicatedMarkTime is the nanoseconds spent in dedicated mark workers
during this cycle. This is updated at the end of the concurrent mark
phase.
dedicatedMarkWorkersNeeded is the number of dedicated mark workers
that need to be started. This is computed at the beginning of each
cycle and decremented as dedicated mark workers get started.
fractionalMarkTime is the nanoseconds spent in the fractional mark
worker during this cycle. This is updated throughout the cycle and
will be up-to-date if the fractional mark worker is not currently
running.
fractionalUtilizationGoal is the fraction of wall clock
time that should be spent in the fractional mark worker on
each P that isn't running a dedicated worker.
For example, if the utilization goal is 25% and there are
no dedicated workers, this will be 0.25. If the goal is
25%, there is one dedicated worker, and GOMAXPROCS is 5,
this will be 0.05 to make up the missing 5%.
If this is zero, no fractional workers are needed.
Initialized from GOGC. GOGC=off means no GC.
gcPercentHeapGoal is the goal heapLive for when the next GC ends,
derived from gcPercent.
Set to ^uint64(0) if gcPercent is disabled.
globalsScan is the total amount of global variable space
that is scannable.
globalsScanWork atomic.Int64
// bytes not in any span, but not released to the OS
These memory stats are effectively duplicates of fields from
memstats.heapStats but are updated atomically or with the world
stopped and don't provide the same consistency guarantees.
Because the runtime is responsible for managing a memory limit, it's
useful to couple these stats more tightly to the gcController, which
is intimately connected to how that memory limit is maintained.
// bytes in mSpanInUse spans
heapLive is the number of bytes considered live by the GC.
That is: retained by the most recent GC plus allocated
since then. heapLive ≤ memstats.totalAlloc-memstats.totalFree, since
heapAlloc includes unmarked objects that have not yet been swept (and
hence goes up as we allocate and down as we sweep) while heapLive
excludes these objects (and hence only goes up between GCs).
To reduce contention, this is updated only when obtaining a span
from an mcentral and at this point it counts all of the unallocated
slots in that span (which will be allocated before that mcache
obtains another span from that mcentral). Hence, it slightly
overestimates the "true" live heap size. It's better to overestimate
than to underestimate because 1) this triggers the GC earlier than
necessary rather than potentially too late and 2) this leads to a
conservative GC rate rather than a GC rate that is potentially too
low.
Whenever this is updated, call traceHeapAlloc() and
this gcControllerState's revise() method.
heapMarked is the number of bytes marked by the previous
GC. After mark termination, heapLive == heapMarked, but
unlike heapLive, heapMarked does not change until the
next mark termination.
heapMinimum is the minimum heap size at which to trigger GC.
For small heaps, this overrides the usual GOGC*live set rule.
When there is a very small live set but a lot of allocation, simply
collecting when the heap reaches GOGC*live results in many GC
cycles and high total per-GC overhead. This minimum amortizes this
per-GC overhead while keeping the heap reasonably small.
During initialization this is set to 4MB*GOGC/100. In the case of
GOGC==0, this will set heapMinimum to 0, resulting in constant
collection even when the heap size is small, which is useful for
debugging.
// bytes released to the OS
heapScan is the number of bytes of "scannable" heap. This is the
live heap (as counted by heapLive), but omitting no-scan objects and
no-scan tails of objects.
This value is fixed at the start of a GC cycle. It represents the
maximum scannable heap.
heapScanWork is the total heap scan work performed this cycle.
stackScanWork is the total stack scan work performed this cycle.
globalsScanWork is the total globals scan work performed this cycle.
These are updated atomically during the cycle. Updates occur in
bounded batches, since they are both written and read
throughout the cycle. At the end of the cycle, heapScanWork is how
much of the retained heap is scannable.
Currently these are measured in bytes. For most uses, this is an
opaque unit of work, but for estimation the definition is important.
Note that stackScanWork includes only stack space scanned, not all
of the allocated stack.
idleMarkTime is the nanoseconds spent in idle marking during this
cycle. This is updated throughout the cycle.
idleMarkWorkers is two packed int32 values in a single uint64.
These two values are always updated simultaneously.
The bottom int32 is the current number of idle mark workers executing.
The top int32 is the maximum number of idle mark workers allowed to
execute concurrently. Normally, this number is just gomaxprocs. However,
during periodic GC cycles it is set to 0 because the system is idle
anyway; there's no need to go full blast on all of GOMAXPROCS.
The maximum number of idle mark workers is used to prevent new workers
from starting, but it is not a hard maximum. It is possible (but
exceedingly rare) for the current number of idle mark workers to
transiently exceed the maximum. This could happen if the maximum changes
just after a GC ends, and an M with no P.
Note that if we have no dedicated mark workers, we set this value to 1,
because in this case we only have fractional GC workers, which aren't scheduled
strictly enough to ensure GC progress. As a result, idle-priority mark
workers are vital to GC progress in these situations.
For example, consider a situation in which goroutines block on the GC
(such as via runtime.GOMAXPROCS) and only fractional mark workers are
scheduled (e.g. GOMAXPROCS=1). Without idle-priority mark workers, the
last running M might skip scheduling a fractional mark worker if its
utilization goal is met, such that once it goes to sleep (because there's
nothing to do), there will be nothing else to spin up a new M for the
fractional worker in the future, stalling GC progress and causing a
deadlock. However, idle-priority workers will *always* run when there is
nothing left to do, ensuring the GC makes progress.
See github.com/golang/go/issues/44163 for more details.
lastConsMark is the computed cons/mark value for the previous 4 GC
cycles. Note that this is *not* the last value of consMark, but the
measured cons/mark value in endCycle.
lastHeapGoal is the value of heapGoal at the moment the last GC
ended. Note that this is distinct from the last value heapGoal had,
because it could change if e.g. gcPercent changes.
Read and written with the world stopped or with mheap_.lock held.
lastHeapScan is the number of bytes of heap that were scanned
last GC cycle. It is the same as heapMarked, but only
includes the "scannable" parts of objects.
Updated when the world is stopped.
lastStackScan is the number of bytes of stack that were scanned
last GC cycle.
// total virtual memory in the Ready state (see mem.go).
markStartTime is the absolute start time in nanoseconds
that assists and background mark workers started.
maxStackScan is the amount of allocated goroutine stack space in
use by goroutines.
This number tracks allocated goroutine stack space rather than used
goroutine stack space (i.e. what is actually scanned) because used
goroutine stack space is much harder to measure cheaply. By using
allocated space, we make an overestimate; this is OK, it's better
to conservatively overcount than undercount.
memoryLimit is the soft memory limit in bytes.
Initialized from GOMEMLIMIT. GOMEMLIMIT=off is equivalent to MaxInt64
which means no soft memory limit in practice.
This is an int64 instead of a uint64 to more easily maintain parity with
the SetMemoryLimit API, which sets a maximum at MaxInt64. This value
should never be negative.
runway is the amount of runway in heap bytes allocated by the
application that we want to give the GC once it starts.
This is computed from consMark during mark termination.
stackScanWork atomic.Int64
sweepDistMinTrigger is the minimum trigger to ensure a minimum
sweep distance.
This bound is also special because it applies to both the trigger
*and* the goal (all other trigger bounds must be based *on* the goal).
It is computed ahead of time, at commit time. The theory is that,
absent a sudden change to a parameter like gcPercent, the trigger
will be chosen to always give the sweeper enough headroom. However,
such a change might dramatically and suddenly move up the trigger,
in which case we need to ensure the sweeper still has enough headroom.
test indicates that this is a test-only copy of gcControllerState.
// total bytes allocated
// total bytes freed
triggered is the point at which the current GC cycle actually triggered.
Only valid during the mark phase of a GC cycle, otherwise set to ^uint64(0).
Updated while the world is stopped.
(*gcControllerState) addGlobals(amount int64)
addIdleMarkWorker attempts to add a new idle mark worker.
If this returns true, the caller must become an idle mark worker unless
there's no background mark worker goroutines in the pool. This case is
harmless because there are already background mark workers running.
If this returns false, the caller must NOT become an idle mark worker.
nosplit because it may be called without a P.
(*gcControllerState) addScannableStack(pp *p, amount int64)
commit recomputes all pacing parameters needed to derive the
trigger and the heap goal. Namely, the gcPercent-based heap goal,
and the amount of runway we want to give the GC this cycle.
This can be called any time. If the GC is in the middle of a
concurrent phase, it will adjust the pacing of that phase.
isSweepDone should be the result of calling isSweepDone(),
unless we're testing or we know we're executing during a GC cycle.
This depends on gcPercent, gcController.heapMarked, and
gcController.heapLive. These must be up to date.
Callers must call gcControllerState.revise after calling this
function if the GC is enabled.
mheap_.lock must be held or the world must be stopped.
endCycle computes the consMark estimate for the next cycle.
userForced indicates whether the current GC cycle was forced
by the application.
enlistWorker encourages another dedicated mark worker to start on
another P if there are spare worker slots. It is used by putfull
when more work is made available.
findRunnableGCWorker returns a background mark worker for pp if it
should be run. This must only be called when gcBlackenEnabled != 0.
heapGoal returns the current heap goal.
heapGoalInternal is the implementation of heapGoal which returns additional
information that is necessary for computing the trigger.
The returned minTrigger is always <= goal.
(*gcControllerState) init(gcPercent int32, memoryLimit int64)
markWorkerStop must be called whenever a mark worker stops executing.
It updates mark work accounting in the controller by a duration of
work in nanoseconds and other bookkeeping.
Safe to execute at any time.
memoryLimitHeapGoal returns a heap goal derived from memoryLimit.
needIdleMarkWorker is a hint as to whether another idle mark worker is needed.
The caller must still call addIdleMarkWorker to become one. This is mainly
useful for a quick check before an expensive operation.
nosplit because it may be called without a P.
removeIdleMarkWorker must be called when an idle mark worker stops executing.
resetLive sets up the controller state for the next mark phase after the end
of the previous one. Must be called after endCycle and before commit, before
the world is started.
The world must be stopped.
revise updates the assist ratio during the GC cycle to account for
improved estimates. This should be called whenever gcController.heapScan or
gcController.heapLive is updated, or whenever any input to gcController.heapGoal
changes. It is safe to call concurrently, but it may race with other
calls to revise.
The result of this race is that the two assist ratio values may not line
up or may be stale. In practice this is OK because the assist ratio
moves slowly throughout a GC cycle, and the assist ratio is a best-effort
heuristic anyway. Furthermore, no part of the heuristic depends on
the two assist ratio values being exact reciprocals of one another, since
the two values are used to convert values from different sources.
The worst case result of this raciness is that we may miss a larger shift
in the ratio (say, if we decide to pace more aggressively against the
hard heap goal) but even this "hard goal" is best-effort (see #40460).
The dedicated GC should ensure we don't exceed the hard goal by too much
in the rare case we do exceed it.
It should only be called when gcBlackenEnabled != 0 (because this
is when assists are enabled and the necessary statistics are
available).
setGCPercent updates gcPercent. commit must be called after.
Returns the old value of gcPercent.
The world must be stopped, or mheap_.lock must be held.
setMaxIdleMarkWorkers sets the maximum number of idle mark workers allowed.
This method is optimistic in that it does not wait for the number of
idle mark workers to reduce to max before returning; it assumes the workers
will deschedule themselves.
setMemoryLimit updates memoryLimit. commit must be called after.
Returns the old value of memoryLimit.
The world must be stopped, or mheap_.lock must be held.
startCycle resets the GC controller's state and computes estimates
for a new GC cycle. The caller must hold worldsema and the world
must be stopped.
trigger returns the current point at which a GC should trigger along with
the heap goal.
The returned value may be compared against heapLive to determine whether
the GC should trigger. Thus, the GC trigger condition should be (but may
not be, in the case of small movements for efficiency) checked whenever
the heap goal may change.
(*gcControllerState) update(dHeapLive, dHeapScan int64)
var gcController
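As a rough, hedged sketch of how the gcPercent-based goal described by gcPercentHeapGoal relates to the fields above (following the published pacer relationship; the real commit also applies heapMinimum, sweep-distance, and memory-limit bounds, and gcPercentGoal is an illustrative name):
	func gcPercentGoal(heapMarked, lastStackScan, globalsScan uint64, gcPercent int32) uint64 {
		if gcPercent < 0 {
			return ^uint64(0) // GOGC=off: no gcPercent-based goal
		}
		return heapMarked + (heapMarked+lastStackScan+globalsScan)*uint64(gcPercent)/100
	}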
assistTimePool is the accumulated assist time since the last update.
bucket struct{fill, capacity uint64}
enabled atomic.Bool
gcEnabled is an internal copy of gcBlackenEnabled that determines
whether the limiter tracks total assist time.
gcBlackenEnabled isn't used directly so as to keep this structure
unit-testable.
idleMarkTimePool is the accumulated idle mark time since the last update.
idleTimePool is the accumulated time Ps spent on the idle list since the last update.
lastEnabledCycle is the GC cycle that last had the limiter enabled.
lastUpdate is the nanotime timestamp of the last time update was called.
Updated under lock, but may be read concurrently.
lock atomic.Uint32
nprocs is an internal copy of gomaxprocs, used to determine total available
CPU time.
gomaxprocs isn't used directly so as to keep this structure unit-testable.
overflow is the cumulative amount of GC CPU time that we tried to fill the
bucket with but exceeded its capacity.
test indicates whether this instance of the struct was made for testing purposes.
transitioning is true when the GC is in a STW and transitioning between
the mark and sweep phases.
accumulate adds time to the bucket and signals whether the limiter is enabled.
This is an internal function that deals just with the bucket. Prefer update.
l.lock must be held.
addAssistTime notifies the limiter of additional assist time. It will be
included in the next update.
addIdleTime notifies the limiter of additional time a P spent on the idle list. It will be
subtracted from the total CPU time in the next update.
finishGCTransition notifies the limiter that the GC transition is complete
and releases ownership of it. It also accumulates STW time in the bucket.
now must be the timestamp from the end of the STW pause.
limiting returns true if the CPU limiter is currently enabled, meaning the Go GC
should take action to limit CPU utilization.
It is safe to call concurrently with other operations.
needUpdate returns true if the limiter's maximum update period has been
exceeded, and so would benefit from an update.
resetCapacity updates the capacity based on GOMAXPROCS. Must not be called
while the GC is enabled.
It is safe to call concurrently with other operations.
startGCTransition notifies the limiter of a GC transition.
This call takes ownership of the limiter and disables all other means of
updating the limiter. Release ownership by calling finishGCTransition.
It is safe to call concurrently with other operations.
tryLock attempts to lock l. Returns true on success.
unlock releases the lock on l. Must be called if tryLock returns true.
update updates the bucket given runtime-specific information. now is the
current monotonic time in nanoseconds.
This is safe to call concurrently with other operations, except *GCTransition.
updateLocked is the implementation of update. l.lock must be held.
var gcCPULimiter
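A hedged sketch of the fill/capacity bucket named above: GC CPU time fills the bucket, mutator CPU time drains it, and the limiter is considered enabled while the bucket is saturated. The function below is illustrative only and is not the limiter's actual accounting:
	func updateBucket(fill, capacity, gcCPU, mutatorCPU uint64) (newFill uint64, limiting bool) {
		if mutatorCPU >= fill {
			fill = 0 // mutator time drains the bucket
		} else {
			fill -= mutatorCPU
		}
		fill += gcCPU // GC time fills it
		if fill >= capacity {
			return capacity, true // saturated: limit GC CPU usage
		}
		return fill, false
	}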
func gcDrain(gcw *gcWork, flags gcDrainFlags)
const gcDrainFlushBgCredit
const gcDrainFractional
const gcDrainIdle
const gcDrainUntilPreempt
A gclink is a node in a linked list of blocks, like mlink,
but it is opaque to the garbage collector.
The GC does not trace the pointers during collection,
and the compiler does not emit write barriers for assignments
of gclinkptr values. Code should store references to gclinks
as gclinkptr, not as *gclink.
next gclinkptr
A gclinkptr is a pointer to a gclink, but it is opaque
to the garbage collector.
ptr returns the *gclink form of p.
The result should be used for accessing fields, not stored
in other data structures.
func nextFreeFast(s *mspan) gclinkptr
func stackpoolalloc(order uint8) gclinkptr
func stackpoolfree(x gclinkptr, order uint8)
gcMarkWorkerMode represents the mode that a concurrent mark worker
should operate in.
Concurrent marking happens through four different mechanisms. One
is mutator assists, which happen in response to allocations and are
not scheduled. The other three are variations in the per-P mark
workers and are distinguished by gcMarkWorkerMode.
const gcMarkWorkerDedicatedMode
const gcMarkWorkerFractionalMode
const gcMarkWorkerIdleMode
const gcMarkWorkerNotWorker
gcMode indicates how concurrent a GC cycle should be.
func gcSweep(mode gcMode)
const gcBackgroundMode
const gcForceBlockMode
const gcForceMode
gcStatsAggregate represents various GC stats obtained from the runtime
acquired together to avoid skew and inconsistencies.
globalsScan uint64
heapScan uint64
stackScan uint64
totalScan uint64
compute populates the gcStatsAggregate with values from the runtime.
A gcTrigger is a predicate for starting a GC cycle. Specifically,
it is an exit condition for the _GCoff phase.
kind gcTriggerKind
// gcTriggerCycle: cycle number to start
// gcTriggerTime: current time
test reports whether the trigger condition is satisfied, meaning
that the exit condition for the _GCoff phase has been met. The exit
condition should be tested when allocating.
func gcStart(trigger gcTrigger)
const gcTriggerCycle
const gcTriggerHeap
const gcTriggerTime
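A hedged sketch of what the three trigger kinds above test, roughly mirroring gcTrigger.test; the names and parameters here are illustrative, not the runtime's:
	func shouldTrigger(kind string, heapLive, trigger uint64, lastGC, now, forcePeriod int64, cyclesDone, wanted uint32) bool {
		switch kind {
		case "heap": // gcTriggerHeap: the live heap reached the trigger point
			return heapLive >= trigger
		case "time": // gcTriggerTime: too long since the last GC
			return lastGC != 0 && now-lastGC > forcePeriod
		default: // gcTriggerCycle: a specific cycle number was requested
			return int32(wanted-cyclesDone) > 0
		}
	}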
A gcWork provides the interface to produce and consume work for the
garbage collector.
A gcWork can be used on the stack as follows:
	(preemption must be disabled)
	gcw := &getg().m.p.ptr().gcw
	.. call gcw.put() to produce and gcw.tryGet() to consume ..
It's important that any use of gcWork during the mark phase prevent
the garbage collector from transitioning to mark termination since
gcWork may locally hold GC work buffers. This can be done by
disabling preemption (systemstack or acquirem).
Bytes marked (blackened) on this gcWork. This is aggregated
into work.bytesMarked by dispose.
flushedWork indicates that a non-empty work buffer was
flushed to the global work list since the last gcMarkDone
termination check. Specifically, this indicates that this
gcWork may have communicated work to another gcWork.
Heap scan work performed on this gcWork. This is aggregated into
gcController by dispose and may also be flushed by callers.
Other types of scan work are flushed immediately.
wbuf1 and wbuf2 are the primary and secondary work buffers.
This can be thought of as a stack of both work buffers'
pointers concatenated. When we pop the last pointer, we
shift the stack up by one work buffer by bringing in a new
full buffer and discarding an empty one. When we fill both
buffers, we shift the stack down by one work buffer by
bringing in a new empty buffer and discarding a full one.
This way we have one buffer's worth of hysteresis, which
amortizes the cost of getting or putting a work buffer over
at least one buffer of work and reduces contention on the
global work lists.
wbuf1 is always the buffer we're currently pushing to and
popping from and wbuf2 is the buffer that will be discarded
next.
Invariant: Both wbuf1 and wbuf2 are nil or neither are.
balance moves some work that's cached in this gcWork back on the
global queue.
dispose returns any cached pointers to the global queue.
The buffers are being put on the full queue so that the
write barriers will not simply reacquire them before the
GC can inspect them. This helps reduce the mutator's
ability to hide pointers during the concurrent mark phase.
empty reports whether w has no mark work available.
(*gcWork) init()
put enqueues a pointer for the garbage collector to trace.
obj must point to the beginning of a heap object or an oblet.
putBatch performs a put on every pointer in obj. See put for
constraints on these pointers.
putFast does a put and reports whether it can be done quickly
otherwise it returns false and the caller needs to call put.
tryGet dequeues a pointer for the garbage collector to trace.
If there are no pointers remaining in this gcWork or in the global
queue, tryGet returns 0. Note that there may still be pointers in
other gcWork instances or other caches.
tryGetFast dequeues a pointer for the garbage collector to trace
if one is readily available. Otherwise it returns 0 and
the caller is expected to call tryGet().
func gcDrain(gcw *gcWork, flags gcDrainFlags)
func gcDrainN(gcw *gcWork, scanWork int64) int64
func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)
func markroot(gcw *gcWork, i uint32, flushBgCredit bool) int64
func markrootBlock(b0, n0 uintptr, ptrmask0 *uint8, gcw *gcWork, shard int) int64
func markrootSpans(gcw *gcWork, shard int)
func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)
func scanConservative(b, n uintptr, ptrmask *uint8, gcw *gcWork, state *stackScanState)
func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
func scanobject(b uintptr, gcw *gcWork)
func scanstack(gp *g, gcw *gcWork) int64
A gList is a list of Gs linked through g.schedlink. A G can only be
on one gQueue or gList at a time.
head guintptr
empty reports whether l is empty.
pop removes and returns the head of l. If l is empty, it returns nil.
push adds gp to the head of l.
pushAll prepends all Gs in q to l.
func netpoll(delay int64) gList
func injectglist(glist *gList)
func netpollready(toRun *gList, pd *pollDesc, mode int32)
// for framepointer-enabled architectures
ctxt unsafe.Pointer
g guintptr
lr uintptr
pc uintptr
ret uintptr
The offsets of sp, pc, and g are known to (hard-coded in) libmach.
ctxt is unusual with respect to GC: it may be a
heap-allocated funcval, so GC needs to track it, but it
needs to be set and cleared from assembly, where it's
difficult to have write barriers. However, ctxt is really a
saved, live register, and we only ever exchange it between
the real register and the gobuf. Hence, we treat it as a
root during stack scanning, which means assembly that saves
and restores it doesn't need write barriers. It's still
typed as a pointer so that any other writes from Go get
write barriers.
func gogo(buf *gobuf)
func gostartcall(buf *gobuf, fn, ctxt unsafe.Pointer)
func gostartcallfn(gobuf *gobuf, fv *funcval)
A godebugInc provides access to internal/godebug's IncNonDefault function
for a given GODEBUG setting.
Calls before internal/godebug registers itself are dropped on the floor.
inc atomic.Pointer[func()]
name string
(*godebugInc) IncNonDefault()
var panicnil *godebugInc
goroutineProfileState indicates the status of a goroutine's stack for the
current in-progress goroutine profile. Goroutines' stacks are initially
"Absent" from the profile, and end up "Satisfied" by the time the profile is
complete. While a goroutine's stack is being captured, its
goroutineProfileState will be "InProgress" and it will not be able to run
until the capture completes and the state moves to "Satisfied".
Some goroutines (the finalizer goroutine, which at various times can be
either a "system" or a "user" goroutine, and the goroutine that is
coordinating the profile, any goroutines created during the profile) move
directly to the "Satisfied" state.
const goroutineProfileAbsent
const goroutineProfileInProgress
const goroutineProfileSatisfied
noCopy atomic.noCopy
value uint32
(*goroutineProfileStateHolder) CompareAndSwap(old, new goroutineProfileState) bool
(*goroutineProfileStateHolder) Load() goroutineProfileState
(*goroutineProfileStateHolder) Store(value goroutineProfileState)
A gQueue is a deque of Gs linked through g.schedlink. A G can only
be on one gQueue or gList at a time.
head guintptr
tail guintptr
empty reports whether q is empty.
pop removes and returns the head of queue q. It returns nil if
q is empty.
popList takes all Gs in q and returns them as a gList.
push adds gp to the head of q.
pushBack adds gp to the tail of q.
pushBackAll adds all Gs in q2 to the tail of q. After this q2 must
not be used.
func runqdrain(pp *p) (drainQ gQueue, n uint32)
func globrunqputbatch(batch *gQueue, n int32)
func runqputbatch(pp *p, q *gQueue, qsize int)
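The intrusive-list idea behind gQueue and gList can be sketched with an ordinary linked node; the real types link Gs through g.schedlink and store the pointers as guintptr, so this is only an illustration:
	type node struct{ next *node }

	type queue struct{ head, tail *node }

	// pushBack adds n to the tail of q, like gQueue.pushBack.
	func (q *queue) pushBack(n *node) {
		n.next = nil
		if q.tail != nil {
			q.tail.next = n
		} else {
			q.head = n
		}
		q.tail = n
	}

	// pop removes and returns the head of q, or nil if q is empty.
	func (q *queue) pop() *node {
		n := q.head
		if n != nil {
			q.head = n.next
			if q.head == nil {
				q.tail = nil
			}
		}
		return n
	}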
gsignalStack saves the fields of the gsignal stack changed by
setGsignalStack.
stack stack
stackguard0 uintptr
stackguard1 uintptr
stktopsp uintptr
func adjustSignalStack(sig uint32, mp *m, gsigStack *gsignalStack) bool
func restoreGsignalStack(st *gsignalStack)
func setGsignalStack(st *stackt, old *gsignalStack)
gTraceState is per-G state for the tracer.
// last P emitted an event for this goroutine
// trace event sequencer
// timestamp when syscall has returned
// syscall or cgo was entered while trace was enabled or StartTrace has emitted EvGoInSyscall about this goroutine
A guintptr holds a goroutine pointer, but typed as a uintptr
to bypass write barriers. It is used in the Gobuf goroutine state
and in scheduling lists that are manipulated without a P.
The Gobuf.g goroutine pointer is almost always updated by assembly code.
In one of the few places it is updated by Go code - func save - it must be
treated as a uintptr to avoid a write barrier being emitted at a bad time.
Instead of figuring out how to emit the write barriers missing in the
assembly manipulation, we change the type of the field to uintptr,
so that it does not require write barriers at all.
Goroutine structs are published in the allg list and never freed.
That will keep the goroutine structs from being collected.
There is never a time that Gobuf.g's contain the only references
to a goroutine: the publishing of the goroutine in allg comes first.
Goroutine pointers are also kept in non-GC-visible places like TLS,
so I can't see them ever moving. If we did want to start moving data
in the GC, we'd need to allocate the goroutine structs from an
alternate arena. Using guintptr doesn't make that problem any worse.
Note that pollDesc.rg, pollDesc.wg also store g in uintptr form,
so they would need to be updated too if g's start moving.
(*guintptr) cas(old, new guintptr) bool
( guintptr) ptr() *g
(*guintptr) set(g *g)
func runqgrab(pp *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32
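A minimal sketch of the uintptr-typed pointer pattern described above, assuming a placeholder gStub type; the conversion back to a pointer happens only at the point of use, so ordinary stores of the value emit no write barriers. This is the pattern only, not the runtime's guintptr:
	package sketch

	import "unsafe"

	// gStub stands in for the runtime's g.
	type gStub struct{ id uint64 }

	// gptr mimics guintptr: a pointer held as a uintptr.
	type gptr uintptr

	func (p gptr) ptr() *gStub { return (*gStub)(unsafe.Pointer(p)) }

	func (p *gptr) set(g *gStub) { *p = gptr(unsafe.Pointer(g)) }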
// points to an array of dataqsiz elements
closed uint32
// size of the circular queue
elemsize uint16
// element type
lock protects all fields in hchan, as well as several
fields in sudogs blocked on this channel.
Do not change another G's status while holding this lock
(in particular, do not ready a G), as this can deadlock
with stack shrinking.
// total data in the queue
// list of recv waiters
// receive index
// list of send waiters
// send index
(*hchan) raceaddr() unsafe.Pointer
(*hchan) sortkey() uintptr
func makechan(t *chantype, size int) *hchan
func makechan64(t *chantype, size int64) *hchan
func reflect_makechan(t *chantype, size int) *hchan
func chanbuf(c *hchan, i uint) unsafe.Pointer
func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool)
func chanrecv1(c *hchan, elem unsafe.Pointer)
func chanrecv2(c *hchan, elem unsafe.Pointer) (received bool)
func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool
func chansend1(c *hchan, elem unsafe.Pointer)
func closechan(c *hchan)
func empty(c *hchan) bool
func full(c *hchan) bool
func racenotify(c *hchan, idx uint, sg *sudog)
func racesync(c *hchan, sg *sudog)
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func reflect_chancap(c *hchan) int
func reflect_chanclose(c *hchan)
func reflect_chanlen(c *hchan) int
func reflect_chanrecv(c *hchan, nb bool, elem unsafe.Pointer) (selected bool, received bool)
func reflect_chansend(c *hchan, elem unsafe.Pointer, nb bool) (selected bool)
func reflectlite_chanlen(c *hchan) int
func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected, received bool)
func selectnbsend(c *hchan, elem unsafe.Pointer) (selected bool)
func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
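The circular-queue bookkeeping implied by dataqsiz, sendx, and recvx above can be sketched as a simple wrap-around increment (illustrative only; the runtime performs this inline in chansend and chanrecv):
	// advance moves a buffer index forward by one element, wrapping to the
	// start of the circular buffer when it reaches dataqsiz.
	func advance(i, dataqsiz uint) uint {
		i++
		if i == dataqsiz {
			i = 0
		}
		return i
	}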
headTailIndex represents a combined 32-bit head and 32-bit tail
of a queue into a single 64-bit value.
head returns the head of a headTailIndex value.
split splits the headTailIndex value into its parts.
tail returns the tail of a headTailIndex value.
func makeHeadTailIndex(head, tail uint32) headTailIndex
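A hedged sketch of the packing described above, assuming the head occupies the high 32 bits and the tail the low 32 bits of the combined 64-bit value; helper names are illustrative:
	func makeHT(head, tail uint32) uint64 { return uint64(head)<<32 | uint64(tail) }

	func htHead(v uint64) uint32 { return uint32(v >> 32) }

	func htTail(v uint64) uint32 { return uint32(v) }

	func htSplit(v uint64) (head, tail uint32) { return htHead(v), htTail(v) }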
A heapArena stores metadata for a heap arena. heapArenas are stored
outside of the Go heap and accessed via the mheap_.arenas index.
bitmap stores the pointer/scalar bitmap for the words in
this arena. See mbitmap.go for a description.
This array uses 1 bit per word of heap, or 1.6% of the heap size (for 64-bit).
checkmarks stores the debug.gccheckmark state. It is only
used if debug.gccheckmark > 0.
If the ith bit of noMorePtrs is true, then there are no more
pointers for the object containing the word described by the
high bit of bitmap[i].
In that case, bitmap[i+1], ... must be zero until the start
of the next object.
We never operate on these entries using bit-parallel techniques,
so it is ok if they are small. Also, they can't be bigger than
uint16 because at that size a single noMorePtrs entry
represents 8K of memory, the minimum size of a span. Any larger
and we'd have to worry about concurrent updates.
This array uses 1 bit per word of bitmap, or .024% of the heap size (for 64-bit).
pageInUse is a bitmap that indicates which spans are in
state mSpanInUse. This bitmap is indexed by page number,
but only the bit corresponding to the first page in each
span is used.
Reads and writes are atomic.
pageMarks is a bitmap that indicates which spans have any
marked objects on them. Like pageInUse, only the bit
corresponding to the first page in each span is used.
Writes are done atomically during marking. Reads are
non-atomic and lock-free since they only occur during
sweeping (and hence never race with writes).
This is used to quickly find whole spans that can be freed.
TODO(austin): It would be nice if this was uint64 for
faster scanning, but we don't have 64-bit atomic bit
operations.
pageSpecials is a bitmap that indicates which spans have
specials (finalizers or other). Like pageInUse, only the bit
corresponding to the first page in each span is used.
Writes are done atomically whenever a special is added to
a span and whenever the last special is removed from a span.
Reads are done atomically to find spans containing specials
during marking.
spans maps from virtual address page ID within this arena to *mspan.
For allocated spans, their pages map to the span itself.
For free spans, only the lowest and highest pages map to the span itself.
Internal pages map to an arbitrary span.
For pages that have never been allocated, spans entries are nil.
Modifications are protected by mheap.lock. Reads can be
performed without locking, but ONLY from indexes that are
known to contain in-use or stack spans. This means there
must not be a safe-point between establishing that an
address is live and looking it up in the spans array.
zeroedBase marks the first byte of the first page in this
arena which hasn't been used yet and is therefore already
zero. zeroedBase is relative to the arena base.
Increases monotonically until it hits heapArenaBytes.
This field is sufficient to determine if an allocation
needs to be zeroed because the page allocator follows an
address-ordered first-fit policy.
Read atomically and written with an atomic CAS.
func pageIndexOf(p uintptr) (arena *heapArena, pageIdx uintptr, pageMask uint8)
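A sketch of the zeroing decision zeroedBase enables, under the stated address-ordered first-fit assumption: anything at or beyond zeroedBase has never been handed out and is therefore still zero. Addresses here are arena-relative, and needsZeroing is an illustrative name:
	// needsZeroing reports whether an allocation starting at allocBase may
	// overlap previously used memory and so must be explicitly zeroed.
	func needsZeroing(allocBase, zeroedBase uintptr) bool {
		return allocBase < zeroedBase
	}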
heapBits provides access to the bitmap bits for a single heap word.
The methods on heapBits take value receivers so that the compiler
can more easily inline calls to those methods and registerize the
struct fields independently.
heapBits will report on pointers in the range [addr,addr+size).
The low bit of mask contains the pointerness of the word at addr
(assuming valid>0).
The next few pointer bits representing words starting at addr.
Those bits already returned by next() are zeroed.
heapBits will report on pointers in the range [addr,addr+size).
The low bit of mask contains the pointerness of the word at addr
(assuming valid>0).
Number of bits in mask that are valid. mask is always less than 1<<valid.
Returns the (absolute) address of the next known pointer and
a heapBits iterator representing any remaining pointers.
If there are no more pointers, returns address 0.
Note that next does not modify h. The caller must record the result.
nosplit because it is used during write barriers and must not be preempted.
nextFast is like next, but can return 0 even when there are more pointers
to be found. Callers should call next if nextFast returns 0 as its second
return value.
	if addr, h = h.nextFast(); addr == 0 {
		if addr, h = h.next(); addr == 0 {
			... no more pointers ...
		}
	}
	... process pointer at addr ...
nextFast is designed to be inlineable.
func heapBitsForAddr(addr, size uintptr) heapBits
heapStatsAggregate represents memory stats obtained from the
runtime. This set of stats is grouped together because they
depend on each other in some way to make sense of the runtime's
current heap memory use. They're also sharded across Ps, so it
makes sense to grab them all at once.
heapStatsDelta heapStatsDelta
Memory stats.
// byte delta of memory committed
// byte delta of memory placed in the heap
// byte delta of memory reserved for unrolled GC prog bits
// byte delta of memory reserved for stacks
// byte delta of memory reserved for work bufs
// bytes allocated for large objects
// number of large object allocations
// bytes freed for large objects (>maxSmallSize)
// number of frees for large objects (>maxSmallSize)
// byte delta of released memory generated
// number of allocs for small objects
// number of frees for small objects (<=maxSmallSize)
Allocator stats.
These are all uint64 because they're cumulative, and could quickly wrap
around otherwise.
// number of tiny allocations
inObjects is the bytes of memory occupied by objects,
numObjects is the number of live objects in the heap.
totalAllocated is the total bytes of heap objects allocated
over the lifetime of the program.
totalAllocs is the number of heap objects allocated over
the lifetime of the program.
totalFreed is the total bytes of heap objects freed
over the lifetime of the program.
totalFrees is the number of heap objects freed over
the lifetime of the program.
compute populates the heapStatsAggregate with values from the runtime.
merge adds in the deltas from b into a.
heapStatsDelta contains deltas of various runtime memory statistics
that need to be updated together in order for them to be kept
consistent with one another.
Memory stats.
// byte delta of memory committed
// byte delta of memory placed in the heap
// byte delta of memory reserved for unrolled GC prog bits
// byte delta of memory reserved for stacks
// byte delta of memory reserved for work bufs
// bytes allocated for large objects
// number of large object allocations
// bytes freed for large objects (>maxSmallSize)
// number of frees for large objects (>maxSmallSize)
// byte delta of released memory generated
// number of allocs for small objects
// number of frees for small objects (<=maxSmallSize)
Allocator stats.
These are all uint64 because they're cumulative, and could quickly wrap
around otherwise.
// number of tiny allocations
merge adds in the deltas from b into a.
The compiler knows that a print of a value of this type
should use printhex instead of printuint (decimal).
A hash iteration structure.
If you modify hiter, also change cmd/compile/internal/reflectdata/reflect.go
and reflect/value.go to match the layout of this structure.
B uint8
// current bucket
bucket uintptr
// bucket ptr at hash_iter initialization time
checkBucket uintptr
// Must be in second position (see cmd/compile/internal/walk/range.go).
h *hmap
i uint8
// Must be in first position. Write nil to indicate iteration end (see cmd/compile/internal/walk/range.go).
// intra-bucket offset to start from during iteration (should be big enough to hold bucketCnt-1)
// keeps overflow buckets of hmap.oldbuckets alive
// keeps overflow buckets of hmap.buckets alive
// bucket iteration started at
t *maptype
// already wrapped around from end of bucket array to beginning
func mapiterinit(t *maptype, h *hmap, it *hiter)
func mapiternext(it *hiter)
func reflect_mapiterelem(it *hiter) unsafe.Pointer
func reflect_mapiterinit(t *maptype, h *hmap, it *hiter)
func reflect_mapiterkey(it *hiter) unsafe.Pointer
func reflect_mapiternext(it *hiter)
A header for a Go map.
// log_2 of # of buckets (can hold up to loadFactor * 2^B items)
// array of 2^B Buckets. may be nil if count==0.
Note: the format of the hmap is also encoded in cmd/compile/internal/reflectdata/reflect.go.
Make sure this stays in sync with the compiler's definition.
// # live cells == size of map. Must be first (used by len() builtin)
// optional fields
flags uint8
// hash seed
// progress counter for evacuation (buckets less than this have been evacuated)
// approximate number of overflow buckets; see incrnoverflow for details
// previous bucket array of half the size, non-nil only when growing
(*hmap) createOverflow()
growing reports whether h is growing. The growth may be to the same size or bigger.
incrnoverflow increments h.noverflow.
noverflow counts the number of overflow buckets.
This is used to trigger same-size map growth.
See also tooManyOverflowBuckets.
To keep hmap small, noverflow is a uint16.
When there are few buckets, noverflow is an exact count.
When there are many buckets, noverflow is an approximate count.
(*hmap) newoverflow(t *maptype, b *bmap) *bmap
noldbuckets calculates the number of buckets prior to the current map growth.
oldbucketmask provides a mask that can be applied to calculate n % noldbuckets().
sameSizeGrow reports whether the current growth is to a map of the same size.
func makemap(t *maptype, hint int, h *hmap) *hmap
func makemap64(t *maptype, hint int64, h *hmap) *hmap
func makemap_small() *hmap
func mapclone2(t *maptype, src *hmap) *hmap
func reflect_makemap(t *maptype, cap int) *hmap
func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr)
func bucketEvacuated(t *maptype, h *hmap, bucket uintptr) bool
func copyKeys(t *maptype, h *hmap, b *bmap, s *slice, offset uint8)
func copyValues(t *maptype, h *hmap, b *bmap, s *slice, offset uint8)
func evacuate(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr)
func growWork(t *maptype, h *hmap, bucket uintptr)
func growWork_fast32(t *maptype, h *hmap, bucket uintptr)
func growWork_fast64(t *maptype, h *hmap, bucket uintptr)
func growWork_faststr(t *maptype, h *hmap, bucket uintptr)
func hashGrow(t *maptype, h *hmap)
func makemap(t *maptype, hint int, h *hmap) *hmap
func makemap64(t *maptype, hint int64, h *hmap) *hmap
func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapaccess1_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapaccess1_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapaccess1_faststr(t *maptype, h *hmap, ky string) unsafe.Pointer
func mapaccess1_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) unsafe.Pointer
func mapaccess2(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccess2_fast32(t *maptype, h *hmap, key uint32) (unsafe.Pointer, bool)
func mapaccess2_fast64(t *maptype, h *hmap, key uint64) (unsafe.Pointer, bool)
func mapaccess2_faststr(t *maptype, h *hmap, ky string) (unsafe.Pointer, bool)
func mapaccess2_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) (unsafe.Pointer, bool)
func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer)
func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast32(t *maptype, h *hmap, key uint32) unsafe.Pointer
func mapassign_fast32ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_fast64(t *maptype, h *hmap, key uint64) unsafe.Pointer
func mapassign_fast64ptr(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func mapassign_faststr(t *maptype, h *hmap, s string) unsafe.Pointer
func mapclear(t *maptype, h *hmap)
func mapclone2(t *maptype, src *hmap) *hmap
func mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func mapdelete_fast32(t *maptype, h *hmap, key uint32)
func mapdelete_fast64(t *maptype, h *hmap, key uint64)
func mapdelete_faststr(t *maptype, h *hmap, ky string)
func mapiterinit(t *maptype, h *hmap, it *hiter)
func moveToBmap(t *maptype, h *hmap, dst *bmap, pos int, src *bmap) (*bmap, int)
func reflect_mapaccess(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer
func reflect_mapaccess_faststr(t *maptype, h *hmap, key string) unsafe.Pointer
func reflect_mapassign(t *maptype, h *hmap, key unsafe.Pointer, elem unsafe.Pointer)
func reflect_mapassign_faststr(t *maptype, h *hmap, key string, elem unsafe.Pointer)
func reflect_mapclear(t *maptype, h *hmap)
func reflect_mapdelete(t *maptype, h *hmap, key unsafe.Pointer)
func reflect_mapdelete_faststr(t *maptype, h *hmap, key string)
func reflect_mapiterinit(t *maptype, h *hmap, it *hiter)
func reflect_maplen(h *hmap) int
func reflectlite_maplen(h *hmap) int
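As a rough guide (the exact entry point chosen depends on the key type and the compiler version), ordinary map syntax lowers onto the functions above:
package main

import "fmt"

func main() {
	m := make(map[string]int, 8) // roughly makemap (or makemap_small for tiny hints)

	m["a"] = 1      // mapassign / mapassign_faststr
	v := m["a"]     // mapaccess1 / mapaccess1_faststr
	w, ok := m["b"] // mapaccess2 / mapaccess2_faststr
	delete(m, "a")  // mapdelete / mapdelete_faststr

	for k, x := range m { // mapiterinit + mapiternext over an hiter
		fmt.Println(k, x)
	}
	fmt.Println(v, w, ok)
}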
data unsafe.Pointer
tab *itab
func assertE2I2(inter *interfacetype, e eface) (r iface)
func assertI2I2(inter *interfacetype, i iface) (r iface)
func assertI2I2(inter *interfacetype, i iface) (r iface)
func printiface(i iface)
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface)
func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
An initTask represents the set of initializations that need to be done for a package.
Keep in sync with ../../test/noinit.go:initTask
nfns uint32
// 0 = uninitialized, 1 = in progress, 2 = done
func plugin_lastmoduleinit() (path string, syms map[string]any, initTasks []*initTask, errstr string)
func doInit(ts []*initTask)
func doInit1(t *initTask)
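What an initTask records is visible at the language level as package-level variable initializers and init functions; doInit runs the tasks of all dependencies before a package's own. For example:
package main

import "fmt"

// Both of these are part of this package's init task: the variable
// initializer runs first, then init, and doInit guarantees that imported
// packages have completed their tasks before either of them starts.
var greeting = func() string {
	fmt.Println("variable initializer")
	return "hello"
}()

func init() {
	fmt.Println("init function")
}

func main() {
	fmt.Println(greeting)
}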
inlinedCall is the encoding of entries in the FUNCDATA_InlTree table.
// type of the called function
// offset into pclntab for name of called function
// position of an instruction whose source position is the call site (offset from entry)
// line number of start of function (func keyword/TEXT directive)
An inlineFrame is a position in an inlineUnwinder.
index is the index of the current record in inlTree, or -1 if we are in
the outermost function.
pc is the PC giving the file/line metadata of the current frame. This is
always a "call PC" (not a "return PC"). This is 0 when the iterator is
exhausted.
( inlineFrame) valid() bool
func newInlineUnwinder(f funcInfo, pc uintptr, cache *pcvalueCache) (inlineUnwinder, inlineFrame)
An inlineUnwinder iterates over the stack of inlined calls at a PC by
decoding the inline table. The last step of iteration is always the frame of
the physical function, so there's always at least one frame.
This is typically used as:
for u, uf := newInlineUnwinder(...); uf.valid(); uf = u.next(uf) { ... }
Implementation note: This is used in contexts that disallow write barriers.
Hence, the constructor returns this by value and pointer receiver methods
must not mutate pointer fields. Also, we keep the mutable state in a separate
struct mostly to keep both structs SSA-able, which generates much better
code.
cache *pcvalueCache
f funcInfo
inlTree *[1048576]inlinedCall
fileLine returns the file name and line number of the call within the given
frame. As a convenience, for the innermost frame, it returns the file and
line of the PC this unwinder was started at (often this is a call to another
physical function).
It returns "?", 0 if something goes wrong.
isInlined returns whether uf is an inlined frame.
next returns the frame representing uf's logical caller.
(*inlineUnwinder) resolveInternal(pc uintptr) inlineFrame
srcFunc returns the srcFunc representing the given frame.
func newInlineUnwinder(f funcInfo, pc uintptr, cache *pcvalueCache) (inlineUnwinder, inlineFrame)
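User code cannot use the inline unwinder directly, but runtime.CallersFrames exposes the same expansion of a physical PC into its logical (inlined) frames. A small sketch:
package main

import (
	"fmt"
	"runtime"
)

func main() {
	pcs := make([]uintptr, 16)
	n := runtime.Callers(0, pcs)

	// CallersFrames may yield several frames per PC when calls were inlined,
	// which is the expansion the inline unwinder performs internally.
	frames := runtime.CallersFrames(pcs[:n])
	for {
		f, more := frames.Next()
		fmt.Println(f.Function, f.File, f.Line)
		if !more {
			break
		}
	}
}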
type interfacetype = abi.InterfaceType (struct)
layout of Itab known to compilers
allocated in non-garbage-collected memory
Needs to be in sync with
../cmd/compile/internal/reflectdata/reflect.go:/^func.WriteTabs.
_type *_type
// variable sized. fun[0]==0 means _type does not implement inter.
// copy of _type.hash. Used for type switches.
inter *interfacetype
init fills in the m.fun array with all the code pointers for
the m.inter/m._type pair. If the type does not implement the interface,
it sets m.fun[0] to 0 and returns the name of an interface function that is missing.
It is ok to call this multiple times on the same m, even concurrently.
func assertE2I(inter *interfacetype, t *_type) *itab
func assertI2I(inter *interfacetype, tab *itab) *itab
func convI2I(dst *interfacetype, src *itab) *itab
func getitab(inter *interfacetype, typ *_type, canfail bool) *itab
func assertI2I(inter *interfacetype, tab *itab) *itab
func convI2I(dst *interfacetype, src *itab) *itab
func ifaceeq(tab *itab, x, y unsafe.Pointer) bool
func itab_callback(tab *itab)
func itabAdd(m *itab)
func panicdottypeI(have *itab, want, iface *_type)
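At the language level these itab helpers back interface conversions and assertions; roughly (the precise lowering varies by Go version):
package main

import "fmt"

type reader interface{ Read([]byte) (int, error) }
type readCloser interface {
	Read([]byte) (int, error)
	Close() error
}

type file struct{}

func (file) Read([]byte) (int, error) { return 0, nil }
func (file) Close() error             { return nil }

func main() {
	var rc readCloser = file{} // fills in an itab for the (readCloser, file) pair

	var r reader = rc       // interface-to-interface conversion, roughly convI2I
	c, ok := r.(readCloser) // checked assertion, roughly assertI2I2/getitab with canfail set
	fmt.Println(c, ok)
}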
Note: change the formula in the mallocgc call in itabAdd if you change these fields.
// current number of filled entries.
// really [size] large
// length of entries array. Always a power of 2.
add adds the given itab to itab table t.
itabLock must be held.
find finds the given interface/type pair in t.
Returns nil if the given interface/type pair isn't present.
var itabTable *itabTableType
var itabTableInit
it_interval timespec
it_value timespec
func timer_settime(timerid int32, flags int32, new, old *itimerspec) int32
Lock-free stack node.
Also known to export_test.go.
next uint64
pushcnt uintptr
func lfstackUnpack(val uint64) *lfnode
func lfnodeValidate(node *lfnode)
func lfstackPack(node *lfnode, cnt uintptr) uint64
lfstack is the head of a lock-free stack.
The zero value of lfstack is an empty list.
This stack is intrusive. Nodes must embed lfnode as the first field.
The stack does not keep GC-visible pointers to nodes, so the caller
must ensure the nodes are allocated outside the Go heap.
(*lfstack) empty() bool
(*lfstack) pop() unsafe.Pointer
(*lfstack) push(node *lfnode)
var gcBgMarkWorkerPool
limiterEvent represents tracking state for an event tracked by the GC CPU limiter.
// Stores a limiterEventStamp.
consume acquires the partial event CPU time from any in-flight event.
It achieves this by storing the current time as the new event time.
Returns the type of the in-flight event, as well as how long it's currently been
executing for. Returns limiterEventNone if no event is active.
start begins tracking a new limiter event of the current type. If an event
is already in flight, then a new event cannot begin because the current time is
already being attributed to that event. In this case, this function returns false.
Otherwise, it returns true.
The caller must be non-preemptible until at least stop is called or this function
returns false. Because this is trying to measure "on-CPU" time of some event, getting
scheduled away during it can mean that whatever we're measuring isn't a reflection
of "on-CPU" time. The OS could deschedule us at any time, but we want to maintain as
close of an approximation as we can.
stop stops the active limiter event. Throws if the active event's type does not match the type being stopped.
The caller must be non-preemptible across the event. See start as to why.
limiterEventStamp is a nanotime timestamp packed with a limiterEventType.
duration computes the difference between now and the start time stored in the stamp.
Returns 0 if the difference is negative, which may happen if now is stale or if the
before and after timestamps cross a 2^(64-limiterEventBits) boundary.
type extracts the event type from the stamp.
func makeLimiterEventStamp(typ limiterEventType, now int64) limiterEventStamp
const limiterEventStampNone
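The packing is just a type tag in the high bits of a 64-bit word and the timestamp in the rest. An illustrative sketch, assuming (hypothetically) 3 bits for the type field:
package main

import "fmt"

const eventBits = 3 // assumed width of the type field, for illustration only

type eventStamp uint64

func makeStamp(typ uint64, now int64) eventStamp {
	return eventStamp(typ<<(64-eventBits) | uint64(now)&(1<<(64-eventBits)-1))
}

func (s eventStamp) typ() uint64  { return uint64(s) >> (64 - eventBits) }
func (s eventStamp) start() int64 { return int64(uint64(s) & (1<<(64-eventBits) - 1)) }

func main() {
	s := makeStamp(2, 123456789)
	fmt.Println(s.typ(), s.start()) // 2 123456789
}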
limiterEventType indicates the type of an event occurring on some P.
These events represent the full set of events that the GC CPU limiter tracks
to execute its function.
This type may use no more than limiterEventBits bits of information.
func makeLimiterEventStamp(typ limiterEventType, now int64) limiterEventStamp
const limiterEventIdle
const limiterEventIdleMarkWork
const limiterEventMarkAssist
const limiterEventNone
const limiterEventScavengeAssist
linearAlloc is a simple linear allocator that pre-reserves a region
of memory and then optionally maps that region into the Ready state
as needed.
The caller is responsible for locking.
// end of reserved space
// transition memory from Reserved to Ready if true
// one byte past end of mapped space
// next free byte
(*linearAlloc) alloc(size, align uintptr, sysStat *sysMemStat) unsafe.Pointer
(*linearAlloc) init(base, size uintptr, mapMemory bool)
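A bump allocator of this shape is simple to sketch. The toy below works over a pre-allocated byte slice rather than reserved address space, and skips the Reserved-to-Ready mapping the real linearAlloc performs:
package main

import "fmt"

type linear struct {
	buf  []byte  // stands in for the reserved region
	next uintptr // next free byte
}

func (l *linear) alloc(size, align uintptr) []byte {
	p := (l.next + align - 1) &^ (align - 1) // round up to alignment
	if p+size > uintptr(len(l.buf)) {
		return nil // past the end of the reserved space
	}
	l.next = p + size
	return l.buf[p : p+size]
}

func main() {
	l := &linear{buf: make([]byte, 1<<10)}
	fmt.Println(len(l.alloc(100, 8)), len(l.alloc(5, 16))) // 100 5
}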
// Must represent a user arena chunk.
allocBits and gcmarkBits hold pointers to a span's mark and
allocation bits. The pointers are 8 byte aligned.
There are three arenas where this data is held.
free: Dirty arenas that are no longer accessed
and can be reused.
next: Holds information to be used in the next GC cycle.
current: Information being used during this GC cycle.
previous: Information being used during the last GC cycle.
A new GC cycle starts with the call to finishsweep_m.
finishsweep_m moves the previous arena to the free arena,
the current arena to the previous arena, and
the next arena to the current arena.
The next arena is populated as the spans request
memory to hold gcmarkBits for the next GC cycle as well
as allocBits for newly allocated spans.
The pointer arithmetic is done "by hand" instead of using
arrays to avoid bounds checks along critical performance
paths.
The sweep will free the old allocBits and set allocBits to the
gcmarkBits. The gcmarkBits are replaced with a fresh zeroed
out memory.
Cache of the allocBits at freeindex. allocCache is shifted
such that the lowest bit corresponds to the bit freeindex.
allocCache holds the complement of allocBits, thus allowing
ctz (count trailing zero) to use it directly.
allocCache may contain bits beyond s.nelems; the caller must ignore
these.
// number of allocated objects
// a copy of allocCount that is stored just before this span is cached
// for divide by elemsize
// computed from sizeclass or from npages
freeIndexForScan is like freeindex, except that freeindex is
used by the allocator whereas freeIndexForScan is used by the
GC scanner. They are two fields so that the GC sees the object
is allocated only when the object and the heap bits are
initialized (see also the assignment of freeIndexForScan in
mallocgc, and issue 54596).
freeindex is the slot index between 0 and nelems at which to begin scanning
for the next free object in this span.
Each allocation scans allocBits starting at freeindex until it encounters a 0
indicating a free object. freeindex is then adjusted so that subsequent scans begin
just past the newly discovered free object.
If freeindex == nelems, this span has no free objects.
allocBits is a bitmap of objects in this span.
If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0
then object n is free;
otherwise, object n is allocated. Bits starting at nelems are
undefined and should never be referenced.
Object n starts at address n*elemsize + (start << pageShift).
mspan.gcmarkBits *gcBits
// whether or not this span represents a user arena
// end of data in span
// For debugging. TODO: Remove.
// list of free objects in mSpanManual spans
// needs to be zeroed before allocation
TODO: Look up nelems from sizeclass and remove this field if it
helps performance.
// number of objects in the span.
// next span in list, or nil if none
// number of pages in span
// bitmap for pinned objects; accessed atomically
// previous span in list, or nil if none
// size class and noscan (uint8)
// guards specials list and changes to pinnerBits
// linked list of special records sorted by offset.
// address of first byte of span aka s.base()
// mSpanInUse etc; accessed atomically (get/set methods)
mspan.sweepgen uint32
// interval for managing chunk allocation
Reference to mspan.base() to keep the chunk alive.
( liveUserArenaChunk) allocBitsForIndex(allocBitIndex uintptr) markBits
( liveUserArenaChunk) base() uintptr
countAlloc returns the number of objects allocated in span s by
scanning the allocation bitmap.
decPinCounter decreases the counter. If the counter reaches 0, the counter
special is deleted and false is returned. Otherwise true is returned.
divideByElemSize returns n/s.elemsize.
n must be within [0, s.npages*_PageSize),
or may be exactly s.npages*_PageSize
if s.elemsize is from sizeclasses.go.
nosplit, because it is called by objIndex, which is nosplit
Returns only when span s has been swept.
nosplit, because it's called by isPinned, which is nosplit
( liveUserArenaChunk) inList() bool
incPinCounter is only called for multiple pins of the same object and records
the _additional_ pins.
Initialize a new span with the given start and npages.
initHeapBits initializes the heap bitmap for a span.
If this is a span of single pointer allocations, it initializes all
words to pointer. If force is true, clears all bits.
isFree reports whether the index'th object in s is unallocated.
The caller must ensure s.state is mSpanInUse, and there must have
been no preemption points since ensuring this (which could allow a
GC transition, which would allow the state to change).
isUnusedUserArenaChunk indicates that the arena chunk has been set to fault
and doesn't contain any scannable memory anymore. However, it might still be
mSpanInUse as it sits on the quarantine list, since it needs to be swept.
This is not safe to execute unless the caller has ownership of the mspan or
the world is stopped (preemption is prevented while the relevant state changes).
This is really only meant to be used by accounting tests in the runtime to
distinguish when a span shouldn't be counted (since mSpanInUse might not be
enough).
( liveUserArenaChunk) layout() (size, n, total uintptr)
( liveUserArenaChunk) markBitsForBase() markBits
( liveUserArenaChunk) markBitsForIndex(objIndex uintptr) markBits
newPinnerBits returns a pointer to 8 byte aligned bytes to be used for this
span's pinner bits. newPinnerBits is used to mark objects that are pinned.
They are copied when the span is swept.
nextFreeIndex returns the index of the next free object in s at
or after s.freeindex.
There are hardware instructions that can be used to make this
faster if profiling warrants it.
nosplit, because it is called by other nosplit code like findObject
( liveUserArenaChunk) pinnerBitSize() uintptr
refillAllocCache takes 8 bytes of s.allocBits starting at whichByte
and negates them so that ctz (count trailing zeros) instructions
can be used. It then places these 8 bytes into the cached 64 bit
s.allocCache.
refreshPinnerBits replaces pinnerBits with a fresh copy in the arenas for the
next GC cycle. If it does not contain any pinned objects, pinnerBits of the
span is set to nil.
reportZombies reports any marked but free objects in s and throws.
This generally means one of the following:
1. User code converted a pointer to a uintptr and then back
unsafely, and a GC ran while the uintptr was the only reference to
an object.
2. User code (or a compiler bug) constructed a bad pointer that
points to a free slot, often a past-the-end pointer.
3. The GC two cycles ago missed a pointer and freed a live object,
but it was still live in the last cycle, so this GC cycle found a
pointer to that object and marked it.
( liveUserArenaChunk) setPinnerBits(p *pinnerBits)
setUserArenaChunkToFault sets the address space for the user arena chunk to fault
and releases any underlying memory resources.
Must be in a non-preemptible state to ensure the consistency of statistics
exported to MemStats.
Find a splice point in the sorted list and check for an already existing
record. Returns a pointer to the next-reference in the list predecessor.
Returns true if the referenced item is an exact match.
userArenaNextFree reserves space in the user arena for an item of the specified
type. If cap is not -1, this is for an array of cap elements of type t.
( lockRank) String() string
lockRank : fmt.Stringer
lockRank : stringer
lockRank : context.stringer
func getLockRank(l *mutex) lockRank
func acquireLockRank(rank lockRank)
func assertRankHeld(r lockRank)
func lockInit(l *mutex, rank lockRank)
func lockWithRank(l *mutex, rank lockRank)
func lockWithRankMayAcquire(l *mutex, rank lockRank)
func releaseLockRank(rank lockRank)
const lockRankAllg
const lockRankAllp
const lockRankAssistQueue
const lockRankCpuprof
const lockRankDeadlock
const lockRankDefer
const lockRankFin
const lockRankForcegc
const lockRankGcBitsArenas
const lockRankGlobalAlloc
const lockRankGscan
const lockRankHchan
const lockRankHchanLeaf
const lockRankItab
const lockRankLeafRank
const lockRankMheap
const lockRankMheapSpecial
const lockRankMspanSpecial
const lockRankNetpollInit
const lockRankNotifyList
const lockRankPanic
const lockRankPollDesc
const lockRankProfBlock
const lockRankProfInsert
const lockRankProfMemActive
const lockRankProfMemFuture
const lockRankRaceFini
const lockRankReflectOffs
const lockRankRoot
const lockRankRwmutexR
const lockRankRwmutexW
const lockRankScavenge
const lockRankSched
const lockRankSpanSetSpine
const lockRankStackLarge
const lockRankStackpool
const lockRankSudog
const lockRankSweep
const lockRankSweepWaiters
const lockRankSysmon
const lockRankTimers
const lockRankTrace
const lockRankTraceBuf
const lockRankTraceStackTab
const lockRankTraceStrings
const lockRankUnknown
const lockRankUserArenaState
const lockRankWbufSpans
// lockRankStruct is embedded in mutex, but is empty when staticlockranking is disabled (the default)
// on allm
// m is blocked on a note
// goroutine running during fatal signal
// cgo traceback if crashing in cgo call
// if non-zero, cgoCallers in use temporarily
// stack that created this thread.
// current running goroutine
// div/mod denominator for arm - known to liblink
dlogPerM dlogPerM
dying int32
fastrand uint64
// Whether it is safe to free g0 and delete m (one of freeMRef, freeMStack, freeMWait)
// on sched.freem
// goroutine with scheduling stack
// Go-allocated signal handling stack
// signal-handling g
id int64
// m is executing a cgo call
// m is an extra m that is not executing Go code
// m is an extra m
these are here because they are too large to be on the stack
of low-level NOSPLIT functions.
libcallg guintptr
// for cpu profiler
libcallsp uintptr
// tracking for external LockOSThread
// tracking for internal lockOSThread
lockedg guintptr
locks int32
locksHeld [10]heldLockInfo
Up to 10 locks held by this m, maintained by the lock ranking code.
mOS mOS
mallocing int32
// gobuf arg to morestack
needPerThreadSyscall indicates that a per-thread syscall is required
for doAllThreadsSyscall.
profileTimer holds the ID of the POSIX interval timer for profiling CPU
usage on this thread.
It is valid when the profileTimerValid field is true. A thread
creates and manages its own timer, and these fields are read and written
only by this thread. But because some of the reads on profileTimerValid
are in signal handling code, this field must use an atomic type.
mOS.profileTimerValid atomic.Bool
mstartfn func()
// number of cgo calls currently in progress
// number of cgo calls in total
needextram bool
// minit on C thread called sigaltstack
nextp puintptr
// next m waiting for lock
// the p that was attached before executing a syscall
// attached p for executing go code (nil if not executing go code)
park note
preemptGen counts the number of completed preemption
signals. This is used to detect when a preemption is
requested, but fails.
// if != "", keep curg running on this m
printlock int8
Fields not known to debuggers.
// for debuggers, but offset not hard-coded
profilehz int32
schedlink muintptr
// storage for saved signal mask
Whether this is a pending preemption signal on this M.
// m is out of work and is actively looking for work
// stores syscall parameters on windows
syscalltick uint32
throwing throwType
// thread-local storage (for x86 extern register)
trace mTraceState
traceback uint8
// PC for traceback while in VDSO call
// SP for traceback while in VDSO call (0 if not in call)
waitTraceBlockReason traceBlockReason
waitTraceSkip int
waitlock unsafe.Pointer
wait* are used to carry arguments from gopark into park_m, because
there's no stack to put them on. That is their sole purpose.
(*m) becomeSpinning()
(*m) hasCgoOnStack() bool
func acquirem() *m
func allocm(pp *p, fn func(), id int64) *m
func getExtraM() (mp *m, last bool)
func lockextra(nilokay bool) *m
func mget() *m
func traceAcquireBuffer() (mp *m, pid int32, bufp *traceBufPtr)
func addExtraM(mp *m)
func adjustSignalStack(sig uint32, mp *m, gsigStack *gsignalStack) bool
func canPreemptM(mp *m) bool
func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
func getMCache(mp *m) *mcache
func mcommoninit(mp *m, id int64)
func mdestroy(mp *m)
func mpreinit(mp *m)
func mput(mp *m)
func newm1(mp *m)
func newosproc(mp *m)
func osPreemptExtEnter(mp *m)
func osPreemptExtExit(mp *m)
func osSetupTLS(mp *m)
func preemptM(mp *m)
func profilealloc(mp *m, x unsafe.Pointer, size uintptr)
func putExtraM(mp *m)
func releasem(mp *m)
func setMNoWB(mp **m, new *m)
func setMNoWB(mp **m, new *m)
func signalM(mp *m, sig int)
func sigNotOnStack(sig uint32, sp uintptr, mp *m)
func sigprof(pc, sp, lr uintptr, gp *g, mp *m)
func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, stackID uint32, skip int, args ...uint64)
func traceReleaseBuffer(mp *m, pid int32)
func traceStackID(mp *m, pcBuf []uintptr, skip int) uint64
func unlockextra(mp *m, delta int32)
func validSIGPROF(mp *m, c *sigctxt) bool
var allm *m
var m0
mapextra holds fields that are not present on all maps.
nextOverflow holds a pointer to a free overflow bucket.
oldoverflow *[]*bmap
If both key and elem do not contain pointers and are inline, then we mark bucket
type as containing no pointers. This avoids scanning such maps.
However, bmap.overflow is a pointer. In order to keep overflow buckets
alive, we store pointers to all overflow buckets in hmap.extra.overflow and hmap.extra.oldoverflow.
overflow and oldoverflow are only used if key and elem do not contain pointers.
overflow contains overflow buckets for hmap.buckets.
oldoverflow contains overflow buckets for hmap.oldbuckets.
The indirection allows storing a pointer to the slice in hiter.
markBits provides access to the mark bit for an object in the heap.
bytep points to the byte holding the mark bit.
mask is a byte with a single bit set that can be &ed with *bytep
to see if the bit has been set.
*m.bytep&m.mask != 0 indicates the mark bit is set.
index can be used along with span information to generate
the address of the object in the heap.
We maintain one set of mark bits for allocation and one for
marking purposes.
bytep *uint8
index uintptr
mask uint8
advance advances the markBits to the next object in the span.
clearMarked clears the marked bit in the markbits, atomically.
isMarked reports whether mark bit m is set.
setMarked sets the marked bit in the markbits, atomically.
setMarkedNonAtomic sets the marked bit in the markbits, non-atomically.
func markBitsForAddr(p uintptr) markBits
func markBitsForSpan(base uintptr) (mbits markBits)
func setCheckmark(obj, base, off uintptr, mbits markBits) bool
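The bytep/mask pair is ordinary bitmap indexing. An illustrative, non-atomic sketch (the runtime's setMarked/clearMarked use atomic operations):
package main

import "fmt"

// bitFor returns the byte holding object i's mark bit and a mask selecting it.
func bitFor(bitmap []uint8, i uintptr) (bytep *uint8, mask uint8) {
	return &bitmap[i/8], uint8(1) << (i % 8)
}

func main() {
	bitmap := make([]uint8, 4) // covers 32 objects
	bytep, mask := bitFor(bitmap, 11)

	fmt.Println(*bytep&mask != 0) // false: not yet marked
	*bytep |= mask                // setMarkedNonAtomic, in effect
	fmt.Println(*bytep&mask != 0) // true
}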
Per-thread (in Go, per-P) cache for small objects.
This includes a small object cache and local allocation stats.
No locking needed because it is per-thread (per-P).
mcaches are allocated from non-GC'd memory, so any heap pointers
must be specially handled.
// spans to allocate from, indexed by spanClass
flushGen indicates the sweepgen during which this mcache
was last flushed. If flushGen != mheap_.sweepgen, the spans
in this mcache are stale and need to be flushed so they
can be swept. This is done in acquirep.
The following members are accessed on every malloc,
so they are grouped here for better caching.
// trigger heap sample after allocating this many bytes
// bytes of scannable heap allocated
stackcache [4]stackfreelist
tiny points to the beginning of the current tiny block, or
nil if there is no current tiny block.
tiny is a heap pointer. Since mcache is in non-GC'd memory,
we handle it by clearing it in releaseAll during mark
termination.
tinyAllocs is the number of tiny allocations performed
by the P that owns this mcache.
tinyAllocs uintptr
tinyoffset uintptr
allocLarge allocates a span for a large object.
nextFree returns the next free object from the cached span if one is available.
Otherwise it refills the cache with a span with an available object and
returns that object along with a flag indicating that this was a heavy
weight allocation. If it is a heavy weight allocation the caller must
determine whether a new GC cycle needs to be started or if the GC is active
whether this goroutine needs to assist the GC.
Must run in a non-preemptible context since otherwise the owner of
c could change.
prepareForSweep flushes c if the system has entered a new sweep phase
since c was populated. This must happen between the sweep phase
starting and the first allocation from c.
refill acquires a new span of span class spc for c. This span will
have at least one free object. The current span in c must be full.
Must run in a non-preemptible context since otherwise the owner of
c could change.
(*mcache) releaseAll()
func allocmcache() *mcache
func getMCache(mp *m) *mcache
func freemcache(c *mcache)
func stackcache_clear(c *mcache)
func stackcacherefill(c *mcache, order uint8)
func stackcacherelease(c *mcache, order uint8)
var mcache0 *mcache
Central list of free objects of a given size.
// list of spans with no free objects
partial and full contain two mspan sets: one of swept in-use
spans, and one of unswept in-use spans. These two trade
roles on each GC cycle. The unswept set is drained either by
allocation or by the background sweeper in every GC cycle,
so only two roles are necessary.
sweepgen is increased by 2 on each GC cycle, so the swept
spans are in partial[sweepgen/2%2] and the unswept spans are in
partial[1-sweepgen/2%2]. Sweeping pops spans from the
unswept set and pushes spans that are still in-use on the
swept set. Likewise, allocating an in-use span pushes it
on the swept set.
Some parts of the sweeper can sweep arbitrary spans, and hence
can't remove them from the unswept set, but will add the span
to the appropriate swept list. As a result, the parts of the
sweeper and mcentral that do consume from the unswept list may
encounter swept spans, and these should be ignored.
// list of spans with a free object
spanclass spanClass
Allocate a span to use in an mcache.
fullSwept returns the spanSet which holds swept spans without any
free slots for this sweepgen.
fullUnswept returns the spanSet which holds unswept spans without any
free slots for this sweepgen.
grow allocates a new empty span from the heap and initializes it for c's size class.
Initialize a single central free list.
partialSwept returns the spanSet which holds partially-filled
swept spans for this sweepgen.
partialUnswept returns the spanSet which holds partially-filled
unswept spans for this sweepgen.
Return span from an mcache.
s must have a span class corresponding to this
mcentral and it must not be empty.
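The sweepgen parity trick described above is compact enough to show directly; an illustrative helper (the names are not the runtime's):
package main

import "fmt"

// sets returns which index of a two-element span-set array holds swept
// spans and which holds unswept spans for a given sweepgen. Because
// sweepgen advances by 2 each GC cycle, the roles swap every cycle.
func sets(sweepgen uint32) (swept, unswept uint32) {
	return sweepgen / 2 % 2, 1 - sweepgen/2%2
}

func main() {
	for _, sg := range []uint32{4, 6, 8} {
		swept, unswept := sets(sg)
		fmt.Printf("sweepgen=%d swept=partial[%d] unswept=partial[%d]\n", sg, swept, unswept)
	}
}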
A memRecord is the bucket data for a bucket of type memProfile,
part of the memory profile.
active is the currently published profile. A profiling
cycle can be accumulated into active once it is complete.
future records the profile events we're counting for cycles
that have not yet been published. This is a ring buffer
indexed by the global heap profile cycle C and stores
cycles C, C+1, and C+2. Unlike active, these counts are
only for a single cycle; they are not cumulative across
cycles.
We store cycle C here because there's a window between when
C becomes the active cycle and when we've flushed it to
active.
memRecordCycle
alloc_bytes uintptr
allocs uintptr
free_bytes uintptr
frees uintptr
add accumulates b into a. It does not zero b.
compute is a function that populates a metricValue
given a populated statAggregate structure.
deps is the set of runtime statistics that this metric
depends on. Before compute is called, the statAggregate
which will be passed must ensure() these dependencies.
metricFloat64Histogram is a runtime copy of runtime/metrics.Float64Histogram
and must be kept structurally identical to that type.
buckets []float64
counts []uint64
metricKind is a runtime copy of runtime/metrics.ValueKind and
must be kept structurally identical to that type.
const metricKindBad
const metricKindFloat64
const metricKindFloat64Histogram
const metricKindUint64
( metricReader) compute(_ *statAggregate, out *metricValue)
metricSample is a runtime copy of runtime/metrics.Sample and
must be kept structurally identical to that type.
name string
value metricValue
metricValue is a runtime copy of runtime/metrics.Value and
must be kept structurally identical to that type.
kind metricKind
// contains non-scalar values.
// contains scalar values for scalar Kinds.
float64HistOrInit tries to pull out an existing float64Histogram
from the value, but if none exists, then it allocates one with
the given buckets.
func compute0(_ *statAggregate, out *metricValue)
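These internal kinds mirror the exported runtime/metrics package, which is how programs actually read them:
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	descs := metrics.All()
	samples := make([]metrics.Sample, len(descs))
	for i, d := range descs {
		samples[i].Name = d.Name
	}
	metrics.Read(samples)

	for _, s := range samples {
		switch s.Value.Kind() {
		case metrics.KindUint64:
			fmt.Println(s.Name, s.Value.Uint64())
		case metrics.KindFloat64:
			fmt.Println(s.Name, s.Value.Float64())
		case metrics.KindFloat64Histogram:
			h := s.Value.Float64Histogram()
			fmt.Println(s.Name, "histogram,", len(h.Counts), "buckets")
		}
	}
}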
Main malloc heap.
The heap itself is the "free" and "scav" treaps,
but all the other global data is here too.
mheap must not be heap-allocated because it contains mSpanLists,
which must not be heap-allocated.
allArenas is the arenaIndex of every mapped arena. This can
be used to iterate through the address space.
Access is protected by mheap_.lock. However, since this is
append-only and old backing arrays are never freed, it is
safe to acquire mheap_.lock, copy the slice header, and
then release mheap_.lock.
allspans is a slice of all mspans ever created. Each mspan
appears exactly once.
The memory for allspans is manually managed and can be
reallocated and moved as the heap grows.
In general, allspans is protected by mheap_.lock, which
prevents concurrent access as well as freeing the backing
store. Accesses during STW might not hold the lock, but
must ensure that allocation cannot happen around the
access (since that may free the backing store).
// all spans out there
arena is a pre-reserved space for allocating heap arenas
(the actual arenas). This is only used on 32-bit.
// allocator for arenaHints
arenaHints is a list of addresses at which to attempt to
add more heap arenas. This is initially populated with a
set of general hint addresses, and grown with the bounds of
actual heap arena ranges.
arenas is the heap arena map. It points to the metadata for
the heap for every arena frame of the entire usable virtual
address space.
Use arenaIndex to compute indexes into this array.
For regions of the address space that are not backed by the
Go heap, the arena map contains nil.
Modifications are protected by mheap_.lock. Reads can be
performed without locking; however, a given entry can
transition from nil to non-nil at any time when the lock
isn't held. (Entries never transition back to nil.)
In general, this is a two-level mapping consisting of an L1
map and possibly many L2 maps. This saves space when there
are a huge number of arena frames. However, on many
platforms (even 64-bit), arenaL1Bits is 0, making this
effectively a single-level map. In this case, arenas[0]
will never be nil.
arenasHugePages indicates whether arenas' L2 entries are eligible
to be backed by huge pages.
// allocator for mcache*
central free lists for small size classes.
the padding makes sure that the mcentrals are
spaced CacheLinePadSize bytes apart, so that each mcentral.lock
gets its own cache line.
central is indexed by spanClass.
curArena is the arena that the heap is currently growing
into. This should always be physPageSize-aligned.
heapArenaAlloc is pre-reserved space for allocating heapArena
objects. This is only used on 32-bit, where we pre-reserve
this space to avoid interleaving it with the heap itself.
lock must only be acquired on the system stack, otherwise a g
could self-deadlock if its stack grows with the lock held.
markArenas is a snapshot of allArenas taken at the beginning
of the mark cycle. Because allArenas is append-only, neither
this slice nor its contents will change during the mark, so
it can be read safely.
// page allocation data structure
Proportional sweep
These parameters represent a linear function from gcController.heapLive
to page sweep count. The proportional sweep system works to
stay in the black by keeping the current page sweep count
above this line at the current gcController.heapLive.
The line has slope sweepPagesPerByte and passes through a
basis point at (sweepHeapLiveBasis, pagesSweptBasis). At
any given time, the system is at (gcController.heapLive,
pagesSwept) in this space.
It is important that the line pass through a point we
control rather than simply starting at a 0,0 origin
because that lets us adjust sweep pacing at any time while
accounting for current progress. If we could only adjust
the slope, it would create a discontinuity in debt if any
progress has already been made.
// pages of spans in stats mSpanInUse
// pages swept this cycle
// pagesSwept to use as the origin of the sweep ratio
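Written out, the sweep line above says the sweeper is on pace when pagesSwept is at least pagesSweptBasis + sweepPagesPerByte * (heapLive - sweepHeapLiveBasis). A worked sketch with made-up numbers:
package main

import "fmt"

// sweepTarget is the page count the sweeper should have reached at a given
// heapLive, per the linear pacing function described above. The names mirror
// the fields, but this is only an illustration.
func sweepTarget(heapLive, heapLiveBasis, pagesSweptBasis uint64, pagesPerByte float64) uint64 {
	return pagesSweptBasis + uint64(pagesPerByte*float64(heapLive-heapLiveBasis))
}

func main() {
	// 1 MiB of heap growth past the basis at 0.001 pages/byte means the
	// sweeper should be ~1048 pages past its basis count to stay on pace.
	fmt.Println(sweepTarget(11<<20, 10<<20, 500, 0.001)) // 1548
}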
reclaimCredit is spare credit for extra pages swept. Since
the page reclaimer works in large chunks, it may reclaim
more than requested. Any spare pages released go to this
credit pool.
reclaimIndex is the page index in allArenas of next page to
reclaim. Specifically, it refers to page (i %
pagesPerArena) of arena allArenas[i / pagesPerArena].
If this is >= 1<<63, the page reclaimer is done scanning
the page marks.
// allocator for span*
// allocator for specialPinCounter
// allocator for specialReachable
// allocator for specialfinalizer*
// lock for special record allocators.
// allocator for specialprofile*
sweepArenas is a snapshot of allArenas taken at the
beginning of the sweep cycle. This can be read safely by
simply blocking GC (by disabling preemption).
// value of gcController.heapLive to use as the origin of sweep ratio; written with lock, read without
// proportional sweep ratio; written with lock, read without
// sweep generation, see comment in mspan; written during STW
// never set, just here to force the specialfinalizer type into DWARF
User arena state.
Protected by mheap_.lock.
alloc allocates a new span of npage pages from the GC'd heap.
spanclass indicates the span's size class and scannability.
Returns a span that has been fully initialized. span.needzero indicates
whether the span has been zeroed. Note that it may not be.
allocMSpanLocked allocates an mspan object.
h.lock must be held.
allocMSpanLocked must be called on the system stack because
its caller holds the heap lock. See mheap for details.
Running on the system stack also ensures that we won't
switch Ps during this function. See tryAllocMSpan for details.
allocManual allocates a manually-managed span of npage pages.
allocManual returns nil if allocation fails.
allocManual adds the bytes used to *stat, which should be a
memstats in-use field. Unlike allocations in the GC'd heap, the
allocation does *not* count toward heapInUse.
The memory backing the returned span may not be zeroed if
span.needzero is set.
allocManual must be called on the system stack because it may
acquire the heap lock via allocSpan. See mheap for details.
If new code is written to call allocManual, do NOT use an
existing spanAllocType value and instead declare a new one.
allocNeedsZero checks if the region of address space [base, base+npage*pageSize),
assumed to be allocated, needs to be zeroed, updating heap arena metadata for
future allocations.
This must be called each time pages are allocated from the heap, even if the page
allocator can otherwise prove the memory it's allocating is already zero because
they're fresh from the operating system. It updates heapArena metadata that is
critical for future page allocations.
There are no locking constraints on this method.
allocSpan allocates an mspan which owns npages worth of memory.
If typ.manual() == false, allocSpan allocates a heap span of class spanclass
and updates heap accounting. If manual == true, allocSpan allocates a
manually-managed span (spanclass is ignored), and the caller is
responsible for any accounting related to its use of the span. Either
way, allocSpan will atomically add the bytes in the newly allocated
span to *sysStat.
The returned span is fully initialized.
h.lock must not be held.
allocSpan must be called on the system stack both because it acquires
the heap lock and because it must block GC transitions.
allocUserArenaChunk attempts to reuse a free user arena chunk represented
as a span.
Must be in a non-preemptible state to ensure the consistency of statistics
exported to MemStats.
Acquires the heap lock. Must run on the system stack for that reason.
enableMetadataHugePages enables huge pages for various sources of heap metadata.
A note on latency: for sufficiently small heaps (<10s of GiB) this function will take constant
time, but may take time proportional to the size of the mapped heap beyond that.
This function is idempotent.
The heap lock must not be held over this operation, since it will briefly acquire
the heap lock.
Must be called on the system stack because it acquires the heap lock.
freeMSpanLocked frees an mspan object.
h.lock must be held.
freeMSpanLocked must be called on the system stack because
its caller holds the heap lock. See mheap for details.
Running on the system stack also ensures that we won't
switch Ps during this function. See tryAllocMSpan for details.
freeManual frees a manually-managed span returned by allocManual.
typ must be the same as the spanAllocType passed to the allocManual that
allocated s.
This must only be called when gcphase == _GCoff. See mSpanState for
an explanation.
freeManual must be called on the system stack because it acquires
the heap lock. See mheap for details.
Free the span back into the heap.
(*mheap) freeSpanLocked(s *mspan, typ spanAllocType)
Try to add at least npage pages of memory to the heap,
returning how much the heap grew by and whether it worked.
h.lock must be held.
Initialize the heap.
initSpan initializes a blank span s which will represent the range
[base, base+npages*pageSize). typ is the type of span being allocated.
nextSpanForSweep finds and pops the next span for sweeping from the
central sweep buffers. It returns ownership of the span to the caller.
Returns nil if no such span exists.
reclaim sweeps and reclaims at least npage pages into the heap.
It is called before allocating npage pages to keep growth in check.
reclaim implements the page-reclaimer half of the sweeper.
h.lock must NOT be held.
reclaimChunk sweeps unmarked spans that start at page indexes [pageIdx, pageIdx+n).
It returns the number of pages returned to the heap.
h.lock must be held and the caller must be non-preemptible. Note: h.lock may be
temporarily unlocked and re-locked in order to do sweeping or if tracing is
enabled.
scavengeAll acquires the heap lock (blocking any additional
manipulation of the page allocator) and iterates over the whole
heap, scavenging every free page available.
Must run on the system stack because it acquires the heap lock.
setSpans modifies the span map so [spanOf(base), spanOf(base+npage*pageSize))
is s.
sysAlloc allocates heap arena space for at least n bytes. The
returned pointer is always heapArenaBytes-aligned and backed by
h.arenas metadata. The returned size is always a multiple of
heapArenaBytes. sysAlloc returns nil on failure.
There is no corresponding free function.
hintList is a list of hint addresses for where to allocate new
heap arenas. It must be non-nil.
register indicates whether the heap arena should be registered
in allArenas.
sysAlloc returns a memory region in the Reserved state. This region must
be transitioned to Prepared and then Ready before use.
h must be locked.
tryAllocMSpan attempts to allocate an mspan object from
the P-local cache, but may fail.
h.lock need not be held.
The caller must ensure that its P won't change underneath
it during this function. Currently this is enforced by requiring
that the function run on the system stack, because that's
the only place it is used now. In the future, this requirement
may be relaxed if its use is necessary elsewhere.
var mheap_
A generic linked list of blocks. (Typically the block is bigger than sizeof(MLink).)
Since assignments to mlink.next will result in a write barrier being performed
this cannot be used by some of the internal GC structures. For example when
the sweeper is placing an unmarked object on the free list it does not want the
write barrier to be called since that could result in the object being reachable.
next *mlink
moduledata records information about the layout of the executable
image. It is written by the linker. Any changes here must be
matched by changes to the code in cmd/link/internal/ld/symtab.go:symtab.
moduledata is stored in statically allocated non-pointer memory;
none of the pointers here are visible to the garbage collector.
// Only in static data
// module failed to load and should be ignored
bss uintptr
covctrs uintptr
cutab []uint32
data uintptr
ebss uintptr
ecovctrs uintptr
edata uintptr
end uintptr
enoptrbss uintptr
enoptrdata uintptr
etext uintptr
etypes uintptr
filetab []byte
findfunctab uintptr
ftab []functab
funcnametab []byte
gcbss uintptr
gcbssmask bitvector
gcdata uintptr
gcdatamask bitvector
// go.func.*
// 1 if module contains the main function, 0 otherwise
This slice records the initializing tasks that need to be
done to start up the program. It is built by the linker.
itablinks []*itab
maxpc uintptr
minpc uintptr
modulehashes []modulehash
modulename string
next *moduledata
noptrbss uintptr
noptrdata uintptr
pcHeader *pcHeader
pclntable []byte
pctab []byte
pkghashes []modulehash
pluginpath string
ptab []ptabEntry
rodata uintptr
text uintptr
textsectmap []textsect
// offsets from types
// offset to *_rtype in previous module
types uintptr
funcName returns the string at nameOff in the function name table.
textAddr returns md.text + off, with special handling for multiple text sections.
off is a (virtual) offset computed at internal linking time,
before the external linker adjusts the sections' base addresses.
The text, or instruction stream, is generated as one large buffer.
The off (offset) for a function is its offset within this buffer.
If the total text size gets too large, there can be issues on platforms like ppc64
if the targets of calls are too far away for the call instruction.
To resolve the large text issue, the text is split into multiple text sections
to allow the linker to generate long calls when necessary.
When this happens, the vaddr for each text section is set to its offset within the text.
Each function's offset is compared against the section vaddrs and ends to determine the containing section.
Then the section relative offset is added to the section's
relocated baseaddr to compute the function address.
It is nosplit because it is part of the findfunc implementation.
textOff is the opposite of textAddr. It converts a PC to a (virtual) offset
to md.text, and returns if the PC is in any Go text section.
It is nosplit because it is part of the findfunc implementation.
func activeModules() []*moduledata
func findmoduledatap(pc uintptr) *moduledata
func moduledataverify1(datap *moduledata)
func pluginftabverify(md *moduledata)
var firstmoduledata
var lastmoduledatap *moduledata
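The textAddr lookup described above amounts to finding the text section whose virtual range contains the offset and rebasing onto that section's relocated address. An illustrative sketch:
package main

import "fmt"

type textSection struct {
	vaddr, end, baseaddr uintptr // virtual range and relocated base address
}

// textAddr illustrates the multiple-text-section case: pick the section
// containing off, then add the section-relative offset to its base.
func textAddr(sections []textSection, text, off uintptr) uintptr {
	if len(sections) <= 1 {
		return text + off
	}
	for _, s := range sections {
		if off >= s.vaddr && off < s.end {
			return s.baseaddr + (off - s.vaddr)
		}
	}
	return 0 // out of range; the real implementation throws
}

func main() {
	secs := []textSection{
		{vaddr: 0x0000, end: 0x1000, baseaddr: 0x400000},
		{vaddr: 0x1000, end: 0x2000, baseaddr: 0x500000},
	}
	fmt.Printf("%#x\n", textAddr(secs, 0x400000, 0x1800)) // 0x500800
}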
A modulehash is used to compare the ABI of a new module or a
package in a new module with the loaded program.
For each shared library a module links against, the linker creates an entry in the
moduledata.modulehashes slice containing the name of the module, the abi hash seen
at link time and a pointer to the runtime abi hash. These are checked in
moduledataverify1 below.
For each loaded plugin, the pkghashes slice has a modulehash of the
newly loaded package that can be used to check the plugin's version of
a package against any previously loaded version of the package.
This is done in plugin.lastmoduleinit.
linktimehash string
modulename string
runtimehash *string
needPerThreadSyscall indicates that a per-thread syscall is required
for doAllThreadsSyscall.
profileTimer holds the ID of the POSIX interval timer for profiling CPU
usage on this thread.
It is valid when the profileTimerValid field is true. A thread
creates and manages its own timer, and these fields are read and written
only by this thread. But because some of the reads on profileTimerValid
are in signal handling code, this field must use an atomic type.
profileTimerValid atomic.Bool
mProfCycleHolder holds the global heap profile cycle number (wrapped at
mProfCycleWrap, stored starting at bit 1), and a flag (stored at bit 0) to
indicate whether future[cycle] in all buckets has been queued to flush into
the active profile.
value atomic.Uint32
increment increases the cycle count by one, wrapping the value at
mProfCycleWrap. It clears the flushed flag.
read returns the current cycle count.
setFlushed sets the flushed flag. It returns the current cycle count and the
previous value of the flushed flag.
var mProfCycle
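The packing is a single word: bit 0 is the flushed flag and the cycle count occupies the remaining bits. Illustratively:
package main

import "fmt"

func pack(cycle uint32, flushed bool) uint32 {
	v := cycle << 1
	if flushed {
		v |= 1
	}
	return v
}

func unpack(v uint32) (cycle uint32, flushed bool) {
	return v >> 1, v&1 != 0
}

func main() {
	cycle, flushed := unpack(pack(7, true))
	fmt.Println(cycle, flushed) // 7 true
}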
allocBits and gcmarkBits hold pointers to a span's mark and
allocation bits. The pointers are 8 byte aligned.
There are three arenas where this data is held.
free: Dirty arenas that are no longer accessed
and can be reused.
next: Holds information to be used in the next GC cycle.
current: Information being used during this GC cycle.
previous: Information being used during the last GC cycle.
A new GC cycle starts with the call to finishsweep_m.
finishsweep_m moves the previous arena to the free arena,
the current arena to the previous arena, and
the next arena to the current arena.
The next arena is populated as the spans request
memory to hold gcmarkBits for the next GC cycle as well
as allocBits for newly allocated spans.
The pointer arithmetic is done "by hand" instead of using
arrays to avoid bounds checks along critical performance
paths.
The sweep will free the old allocBits and set allocBits to the
gcmarkBits. The gcmarkBits are replaced with a fresh zeroed
out memory.
Cache of the allocBits at freeindex. allocCache is shifted
such that the lowest bit corresponds to the bit freeindex.
allocCache holds the complement of allocBits, thus allowing
ctz (count trailing zero) to use it directly.
allocCache may contain bits beyond s.nelems; the caller must ignore
these.
// number of allocated objects
// a copy of allocCount that is stored just before this span is cached
// for divide by elemsize
// computed from sizeclass or from npages
freeIndexForScan is like freeindex, except that freeindex is
used by the allocator whereas freeIndexForScan is used by the
GC scanner. They are two fields so that the GC sees the object
is allocated only when the object and the heap bits are
initialized (see also the assignment of freeIndexForScan in
mallocgc, and issue 54596).
freeindex is the slot index between 0 and nelems at which to begin scanning
for the next free object in this span.
Each allocation scans allocBits starting at freeindex until it encounters a 0
indicating a free object. freeindex is then adjusted so that subsequent scans begin
just past the newly discovered free object.
If freeindex == nelems, this span has no free objects.
allocBits is a bitmap of objects in this span.
If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0
then object n is free;
otherwise, object n is allocated. Bits starting at nelems are
undefined and should never be referenced.
Object n starts at address n*elemsize + (start << pageShift).
gcmarkBits *gcBits
// whether or not this span represents a user arena
// end of data in span
// For debugging. TODO: Remove.
// list of free objects in mSpanManual spans
// needs to be zeroed before allocation
TODO: Look up nelems from sizeclass and remove this field if it
helps performance.
// number of objects in the span.
// next span in list, or nil if none
// number of pages in span
// bitmap for pinned objects; accessed atomically
// previous span in list, or nil if none
// size class and noscan (uint8)
// guards specials list and changes to pinnerBits
// linked list of special records sorted by offset.
// address of first byte of span aka s.base()
// mSpanInUse etc; accessed atomically (get/set methods)
sweepgen uint32
// interval for managing chunk allocation
(*mspan) allocBitsForIndex(allocBitIndex uintptr) markBits
(*mspan) base() uintptr
countAlloc returns the number of objects allocated in span s by
scanning the allocation bitmap.
decPinCounter decreases the counter. If the counter reaches 0, the counter
special is deleted and false is returned. Otherwise true is returned.
divideByElemSize returns n/s.elemsize.
n must be within [0, s.npages*_PageSize),
or may be exactly s.npages*_PageSize
if s.elemsize is from sizeclasses.go.
nosplit, because it is called by objIndex, which is nosplit
Returns only when span s has been swept.
nosplit, because it's called by isPinned, which is nosplit
(*mspan) inList() bool
incPinCounter is only called for multiple pins of the same object and records
the _additional_ pins.
Initialize a new span with the given start and npages.
initHeapBits initializes the heap bitmap for a span.
If this is a span of single pointer allocations, it initializes all
words to pointer. If force is true, clears all bits.
isFree reports whether the index'th object in s is unallocated.
The caller must ensure s.state is mSpanInUse, and there must have
been no preemption points since ensuring this (which could allow a
GC transition, which would allow the state to change).
isUnusedUserArenaChunk indicates that the arena chunk has been set to fault
and doesn't contain any scannable memory anymore. However, it might still be
mSpanInUse as it sits on the quarantine list, since it needs to be swept.
This is not safe to execute unless the caller has ownership of the mspan or
the world is stopped (preemption is prevented while the relevant state changes).
This is really only meant to be used by accounting tests in the runtime to
distinguish when a span shouldn't be counted (since mSpanInUse might not be
enough).
(*mspan) layout() (size, n, total uintptr)
(*mspan) markBitsForBase() markBits
(*mspan) markBitsForIndex(objIndex uintptr) markBits
newPinnerBits returns a pointer to 8 byte aligned bytes to be used for this
span's pinner bits. newPinnerBits is used to mark objects that are pinned.
They are copied when the span is swept.
nextFreeIndex returns the index of the next free object in s at
or after s.freeindex.
There are hardware instructions that can be used to make this
faster if profiling warrants it.
nosplit, because it is called by other nosplit code like findObject
(*mspan) pinnerBitSize() uintptr
refillAllocCache takes 8 bytes of s.allocBits starting at whichByte
and negates them so that ctz (count trailing zeros) instructions
can be used. It then places these 8 bytes into the cached 64 bit
s.allocCache.
refreshPinnerBits replaces pinnerBits with a fresh copy in the arenas for the
next GC cycle. If it does not contain any pinned objects, pinnerBits of the
span is set to nil.
reportZombies reports any marked but free objects in s and throws.
This generally means one of the following:
1. User code converted a pointer to a uintptr and then back
unsafely, and a GC ran while the uintptr was the only reference to
an object.
2. User code (or a compiler bug) constructed a bad pointer that
points to a free slot, often a past-the-end pointer.
3. The GC two cycles ago missed a pointer and freed a live object,
but it was still live in the last cycle, so this GC cycle found a
pointer to that object and marked it.
(*mspan) setPinnerBits(p *pinnerBits)
setUserArenaChunkToFault sets the address space for the user arena chunk to fault
and releases any underlying memory resources.
Must be in a non-preemptible state to ensure the consistency of statistics
exported to MemStats.
Find a splice point in the sorted list and check for an already existing
record. Returns a pointer to the next-reference in the list predecessor.
Returns true if the referenced item is an exact match.
userArenaNextFree reserves space in the user arena for an item of the specified
type. If cap is not -1, this is for an array of cap elements of type t.
func findObject(p, refBase, refOff uintptr) (base uintptr, s *mspan, objIndex uintptr)
func materializeGCProg(ptrdata uintptr, prog *byte) *mspan
func newUserArenaChunk() (unsafe.Pointer, *mspan)
func spanOf(p uintptr) *mspan
func spanOfHeap(p uintptr) *mspan
func spanOfUnchecked(p uintptr) *mspan
func badPointer(s *mspan, p, refBase, refOff uintptr)
func dematerializeGCProg(s *mspan)
func freeUserArenaChunk(s *mspan, x unsafe.Pointer)
func gcmarknewobject(span *mspan, obj, size uintptr)
func greyobject(obj, base, off uintptr, span *mspan, gcw *gcWork, objIndex uintptr)
func newSpecialsIter(span *mspan) specialsIter
func nextFreeFast(s *mspan) gclinkptr
func osStackAlloc(s *mspan)
func osStackFree(s *mspan)
func spanHasNoSpecials(s *mspan)
func spanHasSpecials(s *mspan)
var emptymspan
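The allocCache/ctz scheme used by nextFreeIndex and refillAllocCache above is worth seeing in miniature. A sketch over a single 64-object window (the real code also advances freeindex and refills the cache byte by byte):
package main

import (
	"fmt"
	"math/bits"
)

// nextFree finds the first free slot in a 64-object window: allocBits has a
// 1 for every allocated object, so its complement has a 1 for every free one,
// and a count-trailing-zeros yields the lowest free index directly.
func nextFree(allocBits uint64) (index int, ok bool) {
	cache := ^allocBits // what refillAllocCache stores in s.allocCache
	if cache == 0 {
		return 0, false // every object in this window is allocated
	}
	return bits.TrailingZeros64(cache), true
}

func main() {
	idx, ok := nextFree(0b11111) // objects 0-4 allocated, 5 is the first free slot
	fmt.Println(idx, ok)         // 5 true
}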
mSpanList heads a linked list of spans.
// first span in list, or nil if none
// last span in list, or nil if none
Initialize an empty doubly-linked list.
(*mSpanList) insert(span *mspan)
(*mSpanList) insertBack(span *mspan)
(*mSpanList) isEmpty() bool
(*mSpanList) remove(span *mspan)
takeAll removes all spans from other and inserts them at the front
of list.
An mspan representing actual memory has state mSpanInUse,
mSpanManual, or mSpanFree. Transitions between these states are
constrained as follows:
- A span may transition from free to in-use or manual during any GC
phase.
- During sweeping (gcphase == _GCoff), a span may transition from
in-use to free (as a result of sweeping) or manual to free (as a
result of stacks being freed).
- During GC (gcphase != _GCoff), a span *must not* transition from
manual or in-use to free. Because concurrent GC may read a pointer
and then look up its span, the span state must be monotonic.
Setting mspan.state to mSpanInUse or mSpanManual must be done
atomically and only after all other span fields are valid.
Likewise, if inspecting a span is contingent on it being
mSpanInUse, the state should be loaded atomically and checked
before depending on other fields. This allows the garbage collector
to safely deal with potentially invalid pointers, since resolving
such pointers may race with a span being allocated.
const mSpanDead
const mSpanInUse
const mSpanManual
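The ordering constraints above are easiest to see in a small sketch. The following is an illustration with invented names (it is not the runtime's mSpanStateBox): the state is published atomically only after the other fields are valid, and readers load the state before trusting those fields.
	package main

	import (
		"fmt"
		"sync/atomic"
	)

	const (
		spanDead uint8 = iota
		spanInUse
	)

	// toySpan mimics the ordering rule: state is stored atomically and only
	// after base/npages are valid.
	type toySpan struct {
		state  atomic.Uint8
		base   uintptr
		npages uintptr
	}

	// publish initializes the span fields first and sets the state last, so a
	// concurrent reader that observes spanInUse also observes valid fields.
	func (s *toySpan) publish(base, npages uintptr) {
		s.base = base
		s.npages = npages
		s.state.Store(spanInUse) // must be the final write
	}

	// lookup loads the state first and only then depends on the other fields,
	// mirroring how the GC validates potentially invalid span pointers.
	func (s *toySpan) lookup() (base uintptr, ok bool) {
		if s.state.Load() != spanInUse {
			return 0, false
		}
		return s.base, true
	}

	func main() {
		var s toySpan
		if _, ok := s.lookup(); !ok {
			fmt.Println("not in use yet")
		}
		s.publish(0x1000, 4)
		b, _ := s.lookup()
		fmt.Printf("base %#x\n", b)
	}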
mSpanStateBox holds an atomic.Uint8 to provide atomic operations on
an mSpanState. This is a separate type to disallow accidental comparison
or assignment with mSpanState.
s atomic.Uint8
(*mSpanStateBox) get() mSpanState
(*mSpanStateBox) set(s mSpanState)
// profiling bucket hash table
enablegc bool
Statistics about GC overhead.
// updated atomically or during STW
gcPauseDist represents the distribution of all GC-related
application pauses in the runtime.
Each individual pause is counted separately, unlike pause_ns.
// fraction of CPU time used by GC
Statistics about malloc heap.
// heapInUse at mark termination of the previous GC
// last gc (monotonic time)
Protected by mheap or stopping the world during GC.
// last gc (in unix time)
mcache_sys sysMemStat
Statistics about allocation of low-level fixed-size structures.
// number of user-forced GCs
numgc uint32
Miscellaneous statistics.
// updated atomically or during STW
// circular buffer of recent gc end times (nanoseconds since 1970)
// circular buffer of recent gc pause lengths
pause_total_ns uint64
Statistics about stacks.
// only counts newosproc0 stack in mstats; differs from MemStats.StackSys
var memstats
mTraceState is per-M state for the tracer.
// this M is in TraceStart, potentially before traceEnabled is true
// this M traced a STW start, so it should trace an end
muintptr is a *m that is not tracked by the garbage collector.
Because we do free Ms, there are some additional constraints on
muintptrs:
1. Never hold an muintptr locally across a safe point.
2. Any muintptr in the heap must be owned by the M itself so it can
ensure it is not in use when the last true *m is released.
( muintptr) ptr() *m
(*muintptr) set(m *m)
Mutual exclusion locks. In the uncontended case,
as fast as spin locks (just a few user-level instructions),
but on the contention path they sleep in the kernel.
A zeroed Mutex is unlocked (no need to initialize each lock).
Initialization is helpful for static lock ranking, but not required.
Futex-based impl treats it as uint32 key,
while sema-based impl as M* waitm.
Used to be a union, but unions break precise GC.
Empty struct if lock ranking is disabled, otherwise includes the lock rank
func assertLockHeld(l *mutex)
func assertWorldStoppedOrLockHeld(l *mutex)
func getLockRank(l *mutex) lockRank
func goparkunlock(lock *mutex, reason waitReason, traceReason traceBlockReason, traceskip int)
func lock(l *mutex)
func lock2(l *mutex)
func lockInit(l *mutex, rank lockRank)
func lockWithRank(l *mutex, rank lockRank)
func lockWithRankMayAcquire(l *mutex, rank lockRank)
func unlock(l *mutex)
func unlock2(l *mutex)
func unlockWithRank(l *mutex)
var allglock
var allpLock
var deadlock
var debuglock
var finlock
var itabLock
var netpollInitLock
var paniclk
var profBlockLock
var profInsertLock
var profMemActiveLock
var raceFiniLock
var tracelock
func goexit(neverCallThisFunction)
sleep and wakeup on one-time events.
before any calls to notesleep or notewakeup,
must call noteclear to initialize the Note.
then, exactly one thread can call notesleep
and exactly one thread can call notewakeup (once).
once notewakeup has been called, the notesleep
will return. future notesleep will return immediately.
subsequent noteclear must be called only after
previous notesleep has returned, e.g. it's disallowed
to call noteclear straight after notewakeup.
notetsleep is like notesleep but wakes up after
a given number of nanoseconds even if the event
has not yet happened. if a goroutine uses notetsleep to
wake up early, it must wait to call noteclear until it
can be sure that no other goroutine is calling
notewakeup.
notesleep/notetsleep are generally called on g0,
notetsleepg is similar to notetsleep but is called on user g.
Futex-based impl treats it as uint32 key,
while sema-based impl as M* waitm.
Used to be a union, but unions break precise GC.
func noteclear(n *note)
func notesleep(n *note)
func notetsleep(n *note, ns int64) bool
func notetsleep_internal(n *note, ns int64) bool
func notetsleepg(n *note, ns int64) bool
func notewakeup(n *note)
func sigNoteSetup(*note)
func sigNoteSleep(*note)
func sigNoteWakeup(*note)
notifyList is a ticket-based notification list used to implement sync.Cond.
It must be kept in sync with the sync package.
head *sudog
List of parked waiters.
notify is the ticket number of the next waiter to be notified. It can
be read outside the lock, but is only written to with lock held.
Both wait & notify can wrap around, and such cases will be correctly
handled as long as their "unwrapped" difference is bounded by 2^31.
For this not to be the case, we'd need to have 2^31+ goroutines
blocked on the same condvar, which is currently not possible.
tail *sudog
wait is the ticket number of the next waiter. It is atomically
incremented outside the lock.
func notifyListAdd(l *notifyList) uint32
func notifyListNotifyAll(l *notifyList)
func notifyListNotifyOne(l *notifyList)
func notifyListWait(l *notifyList, t uint32)
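The 2^31 bound mentioned above is what makes a signed-difference comparison safe when tickets wrap around. Below is a small sketch of that comparison (illustrative code, not the sync/runtime implementation).
	package main

	import "fmt"

	// less reports whether ticket a was issued before ticket b, assuming their
	// "unwrapped" difference is bounded by 2^31, as the notifyList comment requires.
	func less(a, b uint32) bool {
		return int32(a-b) < 0
	}

	func main() {
		// Near the wraparound point: 0xFFFFFFFF was issued before 0x00000002.
		fmt.Println(less(0xFFFFFFFF, 0x00000002)) // true
		fmt.Println(less(0x00000002, 0xFFFFFFFF)) // false
	}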
notInHeap is off-heap memory allocated by a lower-level allocator
like sysAlloc or persistentAlloc.
In general, it's better to use real types which embed
runtime/internal/sys.NotInHeap, but this serves as a generic type
for situations where that isn't possible (like in the allocators).
TODO: Use this as the return type of sysAlloc, persistentAlloc, etc?
(*notInHeap) add(bytes uintptr) *notInHeap
func persistentalloc1(size, align uintptr, sysStat *sysMemStat) *notInHeap
var persistentChunks *notInHeap
A notInHeapSlice is a slice backed by runtime/internal/sys.NotInHeap memory.
array *notInHeap
cap int
len int
offAddr represents an address in a contiguous view
of the address space on systems where the address space is
segmented. On other systems, it's just a normal address.
a is just the virtual address, but should never be used
directly. Call addr() to get this value instead.
add adds a uintptr offset to the offAddr.
addr returns the virtual address for this offset address.
diff returns the number of bytes between the
two offAddrs.
equal returns true if the two offAddr values are equal.
lessEqual returns true if l1 is less than or equal to l2 in
the offset address space.
lessThan returns true if l1 is less than l2 in the offset
address space.
sub subtracts a uintptr offset from the offAddr.
func levelIndexToOffAddr(level, idx int) offAddr
func maxSearchAddr() offAddr
func offAddrToLevelIndex(level int, addr offAddr) int
var maxOffAddr
var minOffAddr
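As a rough illustration of the "contiguous view" idea, the sketch below compares raw addresses after subtracting an assumed base offset so that a segmented address space orders linearly; arenaBaseOffset here is a hypothetical constant chosen for the example, not necessarily the runtime's value.
	package main

	import "fmt"

	// arenaBaseOffset is a hypothetical offset that rotates the address space
	// so it appears contiguous; the real value is platform-dependent.
	const arenaBaseOffset uintptr = 0xffff800000000000

	// offAddr wraps a raw virtual address; comparisons happen in offset space.
	type offAddr struct{ a uintptr }

	// lessThan compares two addresses in the offset address space.
	func (l offAddr) lessThan(l2 offAddr) bool {
		return l.a-arenaBaseOffset < l2.a-arenaBaseOffset
	}

	// diff returns the number of bytes between l and l2 in offset space.
	func (l offAddr) diff(l2 offAddr) uintptr {
		return (l.a - arenaBaseOffset) - (l2.a - arenaBaseOffset)
	}

	func main() {
		lo := offAddr{0xffff800000001000} // just above the base in offset space
		hi := offAddr{0x0000000000001000} // wraps around, so it orders higher
		fmt.Println(lo.lessThan(hi))      // true
		fmt.Printf("%#x\n", hi.diff(lo))  // distance in offset space
	}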
// pool of available defer structs (see panic.go)
deferpoolbuf [32]*_defer
Number of timerDeleted timers in P's heap.
Available G's (status == Gdead)
Per-P GC state
// Nanoseconds in assistAlloc
// Nanoseconds in fractional mark worker (atomic)
gcMarkWorkerMode is the mode for the next mark worker to run in.
That is, this is used to communicate with the worker goroutine
selected for immediate execution by
gcController.findRunnableGCWorker. When scheduling other goroutines,
this field must be set to gcMarkWorkerNotWorker.
gcMarkWorkerStartTime is the nanotime() at which the most recent
mark worker started.
gcw is this P's GC work buffer cache. The work buffer is
filled by write barriers, drained by mutator assists, and
disposed on certain GC state transitions.
Cache of goroutine ids, amortizes accesses to runtime·sched.goidgen.
goidcacheend uint64
id int32
limiterEvent tracks events for the GC CPU limiter.
link puintptr
// back-link to associated m (nil if idle)
maxStackScanDelta accumulates the amount of stack space held by
live goroutines (i.e. those eligible for stack scanning).
Flushed to gcController.maxStackScan once maxStackScanSlack
or -maxStackScanSlack is reached.
mcache *mcache
Cache of mspan objects from the heap.
Number of timers in P's heap.
pageTraceBuf is a buffer for writing out page allocation/free/scavenge traces.
Used only if GOEXPERIMENT=pagetrace.
// per-P to avoid mutex
pcache pageCache
Cache of a single pinner object to reduce allocations from repeated
pinner creation.
preempt is set to indicate that this P should enter the
scheduler ASAP (regardless of what G is running on it).
raceprocctx uintptr
// if 1, run sched.safePointFn at next safe point
runnext, if non-nil, is a runnable G that was ready'd by
the current G and should be run next instead of what's in
runq if there's time remaining in the running G's time
slice. It will inherit the time left in the current time
slice. If a set of goroutines is locked in a
communicate-and-wait pattern, this schedules that set as a
unit and eliminates the (potentially large) scheduling
latency that otherwise arises from adding the ready'd
goroutines to the end of the run queue.
Note that while other P's may atomically CAS this to zero,
only the owner P can CAS it to a valid G.
runq [256]guintptr
Queue of runnable goroutines. Accessed without lock.
runqtail uint32
gc-time statistics about current goroutines
Note that this differs from maxStackScan in that this
accumulates the actual stack observed to be used at GC time (hi - sp),
not an instantaneous measure of the total stack size that might need
to be scanned (hi - lo).
// stack size of goroutines scanned by this P
// number of goroutines scanned by this P
// incremented on every scheduler call
statsSeq is a counter indicating whether this P is currently
writing any stats. Its value is even when not, odd when it is.
// one of pidle/prunning/...
sudogbuf [128]*sudog
sudogcache []*sudog
// incremented on every system call
// last tick observed by sysmon
The when field of the first entry on the timer heap.
This is 0 if the timer heap is empty.
The earliest known nextwhen field of a timer with
timerModifiedEarlier status. Because the timer may have been
modified again, there need not be any timer with this value.
This is 0 if there are no timerModifiedEarlier timers.
Race context used while executing timer functions.
Actions to take at some time. This is used to implement the
standard library's time package.
Must hold timersLock to access.
Lock for timers. We normally access the timers while running
on this P, but the scheduler can also do it from a different P.
trace pTraceState
wbBuf is this P's GC write barrier buffer.
TODO: Consider caching this in the running G.
destroy releases all of the resources associated with pp and
transitions it to status _Pdead.
sched.lock must be held and the world must be stopped.
init initializes pp, which may be a freshly allocated p or a
previously destroyed p, and transitions it to status _Pgcstop.
func checkIdleGCNoP() (*p, *g)
func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p
func pidleget(now int64) (*p, int64)
func pidlegetSpinning(now int64) (*p, int64)
func procresize(nprocs int32) *p
func releasep() *p
func acquirep(pp *p)
func addAdjustedTimers(pp *p, moved []*timer)
func adjusttimers(pp *p, now int64)
func allocm(pp *p, fn func(), id int64) *m
func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p
func checkTimers(pp *p, now int64) (rnow, pollUntil int64, ran bool)
func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64
func cleantimers(pp *p)
func clearDeletedTimers(pp *p)
func doaddtimer(pp *p, t *timer)
func dodeltimer(pp *p, i int) int
func dodeltimer0(pp *p)
func exitsyscallfast(oldp *p) bool
func gcMarkWorkAvailable(p *p) bool
func gfget(pp *p) *g
func gfpurge(pp *p)
func gfput(pp *p, gp *g)
func globrunqget(pp *p, max int32) *g
func handoffp(pp *p)
func moveTimers(pp *p, timers []*timer)
func newm(fn func(), pp *p, id int64)
func nobarrierWakeTime(pp *p) int64
func pageTraceAlloc(pp *p, now int64, base, npages uintptr)
func pageTraceFree(pp *p, now int64, base, npages uintptr)
func pageTraceScav(pp *p, now int64, base, npages uintptr)
func pidleput(pp *p, now int64) int64
func preemptone(pp *p) bool
func runOneTimer(pp *p, t *timer, now int64)
func runqdrain(pp *p) (drainQ gQueue, n uint32)
func runqempty(pp *p) bool
func runqget(pp *p) (gp *g, inheritTime bool)
func runqgrab(pp *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32
func runqput(pp *p, gp *g, next bool)
func runqputbatch(pp *p, q *gQueue, qsize int)
func runqputslow(pp *p, gp *g, h, t uint32) bool
func runqsteal(pp, p2 *p, stealRunNextG bool) *g
func runtimer(pp *p, now int64) int64
func startm(pp *p, spinning, lockheld bool)
func traceCPUSample(gp *g, pp *p, stk []uintptr)
func traceGoSysBlock(pp *p)
func traceProcFree(pp *p)
func traceProcStop(pp *p)
func updateTimer0When(pp *p)
func updateTimerModifiedEarliest(pp *p, nextwhen int64)
func updateTimerPMask(pp *p)
func verifyTimerHeap(pp *p)
func wbBufFlush1(pp *p)
func wirep(pp *p)
chunkHugePages indicates whether page bitmap chunks should be backed
by huge pages.
chunks is a slice of bitmap chunks.
The total size of chunks is quite large on most 64-bit platforms
(O(GiB) or more) if flattened, so rather than making one large mapping
(which has problems on some platforms, even when PROT_NONE) we use a
two-level sparse array approach similar to the arena index in mheap.
To find the chunk containing a memory address `a`, do:
chunkOf(chunkIndex(a))
Below is a table describing the configuration for chunks for various
heapAddrBits supported by the runtime.
heapAddrBits | L1 Bits | L2 Bits | L2 Entry Size
------------------------------------------------
32 | 0 | 10 | 128 KiB
33 (iOS) | 0 | 11 | 256 KiB
48 | 13 | 13 | 1 MiB
There's no reason to use the L1 part of chunks on 32-bit, the
address space is small so the L2 is small. For platforms with a
48-bit address space, we pick the L1 such that the L2 is 1 MiB
in size, which is a good balance between low granularity without
making the impact on BSS too high (note the L1 is stored directly
in pageAlloc).
To iterate over the bitmap, use inUse to determine which ranges
are currently available. Otherwise one might iterate over unused
ranges.
Protected by mheapLock.
TODO(mknyszek): Consider changing the definition of the bitmap
such that 1 means free and 0 means in-use so that summaries and
the bitmaps align better on zero-values.
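Using the 48-bit row of the table above (13 L1 bits, 13 L2 bits) and assuming 4 MiB (2^22-byte) chunks, a sketch of the chunkOf(chunkIndex(a)) lookup path might look like the following; the constants and helper names are illustrative, and the real index also accounts for an arena base offset.
	package main

	import "fmt"

	const (
		logPallocChunkBytes = 22 // assumed 4 MiB chunks
		pallocChunksL2Bits  = 13 // from the 48-bit heapAddrBits row above
	)

	// chunkIndex returns the global index of the chunk containing addr.
	func chunkIndex(addr uintptr) uint {
		return uint(addr >> logPallocChunkBytes)
	}

	// split breaks a chunk index into the L1 and L2 indices of the two-level
	// sparse array; chunks[l1] would be a lazily-mapped block of 1<<13 entries.
	func split(ci uint) (l1, l2 uint) {
		return ci >> pallocChunksL2Bits, ci & (1<<pallocChunksL2Bits - 1)
	}

	func main() {
		addr := uintptr(0x0000_7f3a_1234_5678)
		ci := chunkIndex(addr)
		l1, l2 := split(ci)
		fmt.Printf("chunk %#x -> L1 %#x, L2 %#x\n", ci, l1, l2)
	}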
start and end represent the chunk indices
which pageAlloc knows about. It assumes
chunks in the range [start, end) are
currently ready to use.
inUse is a slice of ranges of address space which are
known by the page allocator to be currently in-use (passed
to grow).
We care much more about having a contiguous heap in these cases
and take additional measures to ensure that, so in nearly all
cases this should have just 1 element.
All access is protected by the mheapLock.
mheap_.lock. This level of indirection makes it possible
to test pageAlloc independently of the runtime allocator.
scav stores the scavenger state.
The address to start an allocation search with. It must never
point to any memory that is not contained in inUse, i.e.
inUse.contains(searchAddr.addr()) must always be true. The one
exception to this rule is that it may take on the value of
maxOffAddr to indicate that the heap is exhausted.
We guarantee that all valid heap addresses below this value
are allocated and not worth searching.
start and end represent the chunk indices
which pageAlloc knows about. It assumes
chunks in the range [start, end) are
currently ready to use.
Radix tree of summaries.
Each slice's cap represents the whole memory reservation.
Each slice's len reflects the allocator's maximum known
mapped heap address for that level.
The backing store of each summary level is reserved in init
and may or may not be committed in grow (small address spaces
may commit all the memory in init).
The purpose of keeping len <= cap is to enforce bounds checks
on the top end of the slice so that instead of an unknown
runtime segmentation fault, we get a much friendlier out-of-bounds
error.
To iterate over a summary level, use inUse to determine which ranges
are currently available. Otherwise one might try to access
memory which is only Reserved which may result in a hard fault.
We may still get segmentation faults < len since some of that
memory may not be committed yet.
summaryMappedReady is the number of bytes mapped in the Ready state
in the summary structure. Used only for testing currently.
Protected by mheapLock.
sysStat is the runtime memstat to update when new system
memory is committed by the pageAlloc for allocation metadata.
Whether or not this struct is being used in tests.
alloc allocates npages worth of memory from the page heap, returning the base
address for the allocation and the amount of scavenged memory in bytes
contained in the region [base address, base address + npages*pageSize).
Returns a 0 base address on failure, in which case other returned values
should be ignored.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
allocRange marks the range of memory [base, base+npages*pageSize) as
allocated. It also updates the summaries to reflect the newly-updated
bitmap.
Returns the amount of scavenged memory in bytes present in the
allocated range.
p.mheapLock must be held.
allocToCache acquires a pageCachePages-aligned chunk of free pages which
may not be contiguous, and returns a pageCache structure which owns the
chunk.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
chunkOf returns the chunk at the given chunk index.
The chunk index must be valid or this method may throw.
enableChunkHugePages enables huge pages for the chunk bitmap mappings (disabled by default).
This function is idempotent.
A note on latency: for sufficiently small heaps (<10s of GiB) this function will take constant
time, but may take time proportional to the size of the mapped heap beyond that.
The heap lock must not be held over this operation, since it will briefly acquire
the heap lock.
Must be called on the system stack because it acquires the heap lock.
find searches for the first (address-ordered) contiguous free region of
npages in size and returns a base address for that region.
It uses p.searchAddr to prune its search and assumes that no palloc chunks
below chunkIndex(p.searchAddr) contain any free memory at all.
find also computes and returns a candidate p.searchAddr, which may or
may not prune more of the address space than p.searchAddr already does.
This candidate is always a valid p.searchAddr.
find represents the slow path and the full radix tree search.
Returns a base address of 0 on failure, in which case the candidate
searchAddr returned is invalid and must be ignored.
p.mheapLock must be held.
findMappedAddr returns the smallest mapped offAddr that is
>= addr. That is, if addr refers to mapped memory, then it is
returned. If addr is higher than any mapped region, then
it returns maxOffAddr.
p.mheapLock must be held.
free returns npages worth of memory starting at base back to the page heap.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
grow sets up the metadata for the address range [base, base+size).
It may allocate metadata, in which case *p.sysStat will be updated.
p.mheapLock must be held.
(*pageAlloc) init(mheapLock *mutex, sysStat *sysMemStat, test bool)
scavenge scavenges nbytes worth of free pages, starting with the
highest address first. Successive calls continue from where it left
off until the heap is exhausted. force makes all memory available to
scavenge, ignoring huge page heuristics.
Returns the amount of memory scavenged in bytes.
scavenge always tries to scavenge nbytes worth of memory, and will
only fail to do so if the heap is exhausted for now.
scavengeOne walks over the chunk at chunk index ci and searches for
a contiguous run of pages to scavenge. It will try to scavenge
at most max bytes at once, but may scavenge more to avoid
breaking huge pages. Once it scavenges some memory it returns
how much it scavenged in bytes.
searchIdx is the page index to start searching from in ci.
Returns the number of bytes scavenged.
Must run on the systemstack because it acquires p.mheapLock.
sysGrow performs architecture-dependent operations on heap
growth for the page allocator, such as mapping in new memory
for summaries. It also updates the length of the slices in
p.summary.
base is the base of the newly-added heap memory and limit is
the first address past the end of the newly-added heap memory.
Both must be aligned to pallocChunkBytes.
The caller must update p.start and p.end after calling sysGrow.
sysInit performs architecture-dependent initialization of fields
in pageAlloc. pageAlloc should be uninitialized except for sysStat
if any runtime statistic should be updated.
tryChunkOf returns the bitmap data for the given chunk.
Returns nil if the chunk data has not been mapped.
update updates heap metadata. It must be called each time the bitmap
is updated.
If contig is true, update does some optimizations assuming that there was
a contiguous allocation or free between addr and addr+npages. alloc indicates
whether the operation performed was an allocation or a free.
p.mheapLock must be held.
pageBits is a bitmap representing one bit per page in a palloc chunk.
block64 returns the 64-bit aligned block of bits containing the i'th bit.
clear clears bit i of pageBits.
clearAll frees all the bits of b.
clearBlock64 clears the 64-bit aligned block of bits containing the i'th bit that
are set in v.
clearRange clears bits in the range [i, i+n).
get returns the value of the i'th bit in the bitmap.
popcntRange counts the number of set bits in the
range [i, i+n).
set sets bit i of pageBits.
setAll sets all the bits of b.
setBlock64 sets the 64-bit aligned block of bits containing the i'th bit that
are set in v.
setRange sets bits in the range [i, i+n).
pageCache represents a per-p cache of pages the allocator can
allocate from without a lock. More specifically, it represents
a pageCachePages*pageSize chunk of memory with 0 or more free
pages in it.
// base address of the chunk
// 64-bit bitmap representing free pages (1 means free)
// 64-bit bitmap representing scavenged pages (1 means scavenged)
alloc allocates npages from the page cache and is the main entry
point for allocation.
Returns a base address and the amount of scavenged memory in the
allocated region in bytes.
Returns a base address of zero on failure, in which case the
amount of scavenged memory should be ignored.
allocN is a helper which attempts to allocate npages worth of pages
from the cache. It represents the general case for allocating from
the page cache.
Returns a base address and the amount of scavenged memory in the
allocated region in bytes.
empty reports whether the page cache has no free pages.
flush empties out unallocated free pages in the given cache
into s. Then, it clears the cache, such that empty returns
true.
p.mheapLock must be held.
Must run on the system stack because p.mheapLock must be held.
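A sketch of allocating one page out of a 64-bit cache bitmap like the one described above (1 bit per page, 1 meaning free); the page size and field names are assumptions for illustration, not the runtime's implementation.
	package main

	import (
		"fmt"
		"math/bits"
	)

	const pageSize = 8192 // assumed page size for the sketch

	// pageCache is a toy per-P cache: base is the address of a 64-page chunk
	// and cache is a bitmap of free pages within it (1 = free).
	type pageCache struct {
		base  uintptr
		cache uint64
	}

	// allocOne takes the lowest free page out of the cache and returns its
	// base address, or 0 if the cache is empty.
	func (c *pageCache) allocOne() uintptr {
		if c.cache == 0 {
			return 0
		}
		i := bits.TrailingZeros64(c.cache)
		c.cache &^= 1 << uint(i) // mark the page allocated
		return c.base + uintptr(i)*pageSize
	}

	func main() {
		c := pageCache{base: 0x100000, cache: 0b1100}
		fmt.Printf("%#x\n", c.allocOne()) // page 2 of the chunk
		fmt.Printf("%#x\n", c.allocOne()) // page 3 of the chunk
		fmt.Printf("%#x\n", c.allocOne()) // 0: cache exhausted
	}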
type pageTraceBuf (struct)
pallocBits is a bitmap that tracks page allocations for at most one
palloc chunk.
The precise representation is an implementation detail, but for the
sake of documentation, 0s are free pages and 1s are allocated pages.
allocAll allocates all the bits of b.
allocPages64 allocates a 64-bit block of 64 pages aligned to 64 pages according
to the bits set in alloc. The block set is the one containing the i'th page.
allocRange allocates the range [i, i+n).
find searches for npages contiguous free pages in pallocBits and returns
the index where that run starts, as well as the index of the first free page
it found in the search. searchIdx represents the first known free page and
where to begin the next search from.
If find fails to find any free space, it returns an index of ^uint(0) and
the new searchIdx should be ignored.
Note that if npages == 1, the two returned values will always be identical.
find1 is a helper for find which searches for a single free page
in the pallocBits and returns the index.
See find for an explanation of the searchIdx parameter.
findLargeN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run starts, as well as the
index of the first free page it found in its search.
See alloc for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findLargeN assumes npages > 64, where any such run of free pages
crosses at least one aligned 64-bit boundary in the bits.
findSmallN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run of contiguous pages
starts as well as the index of the first free page it finds in its search.
See find for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findSmallN assumes npages <= 64, where any such contiguous run of pages
crosses at most one aligned 64-bit boundary in the bits.
free frees the range [i, i+n) of pages in the pallocBits.
free1 frees a single page in the pallocBits at i.
freeAll frees all the bits of b.
pages64 returns a 64-bit bitmap representing a block of 64 pages aligned
to 64 pages. The returned block of pages is the one containing the i'th
page in this pallocBits. Each bit represents whether the page is in-use.
summarize returns a packed summary of the bitmap in pallocBits.
pallocData encapsulates pallocBits and a bitmap for
whether or not a given page is scavenged in a single
structure. It's effectively a pallocBits with
additional functionality.
Update the comment on (*pageAlloc).chunks should this
structure change.
pallocBits pallocBits
scavenged pageBits
allocAll sets every bit in the bitmap to 1 and updates
the scavenged bits appropriately.
allocPages64 allocates a 64-bit block of 64 pages aligned to 64 pages according
to the bits set in alloc. The block set is the one containing the i'th page.
allocRange sets bits [i, i+n) in the bitmap to 1 and
updates the scavenged bits appropriately.
find searches for npages contiguous free pages in pallocBits and returns
the index where that run starts, as well as the index of the first free page
it found in the search. searchIdx represents the first known free page and
where to begin the next search from.
If find fails to find any free space, it returns an index of ^uint(0) and
the new searchIdx should be ignored.
Note that if npages == 1, the two returned values will always be identical.
find1 is a helper for find which searches for a single free page
in the pallocBits and returns the index.
See find for an explanation of the searchIdx parameter.
findLargeN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run starts, as well as the
index of the first free page it found in its search.
See alloc for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findLargeN assumes npages > 64, where any such run of free pages
crosses at least one aligned 64-bit boundary in the bits.
findScavengeCandidate returns a start index and a size for this pallocData
segment which represents a contiguous region of free and unscavenged memory.
searchIdx indicates the page index within this chunk to start the search, but
note that findScavengeCandidate searches backwards through the pallocData. As
a result, it will return the highest scavenge candidate in address order.
min indicates a hard minimum size and alignment for runs of pages. That is,
findScavengeCandidate will not return a region smaller than min pages in size,
or that is min pages or greater in size but not aligned to min. min must be
a non-zero power of 2 <= maxPagesPerPhysPage.
max is a hint for how big of a region is desired. If max >= pallocChunkPages, then
findScavengeCandidate effectively returns entire free and unscavenged regions.
If max < pallocChunkPages, it may truncate the returned region such that size is
max. However, findScavengeCandidate may still return a larger region if, for
example, it chooses to preserve huge pages, or if max is not aligned to min (it
will round up). That is, even if max is small, the returned size is not guaranteed
to be equal to max. max is allowed to be less than min, in which case it is as if
max == min.
findSmallN is a helper for find which searches for npages contiguous free pages
in this pallocBits and returns the index where that run of contiguous pages
starts as well as the index of the first free page it finds in its search.
See find for an explanation of the searchIdx parameter.
Returns a ^uint(0) index on failure and the new searchIdx should be ignored.
findSmallN assumes npages <= 64, where any such contiguous run of pages
crosses at most one aligned 64-bit boundary in the bits.
free frees the range [i, i+n) of pages in the pallocBits.
free1 frees a single page in the pallocBits at i.
freeAll frees all the bits of b.
pages64 returns a 64-bit bitmap representing a block of 64 pages aligned
to 64 pages. The returned block of pages is the one containing the i'th
page in this pallocBits. Each bit represents whether the page is in-use.
summarize returns a packed summary of the bitmap in pallocBits.
pallocSum is a packed summary type which packs three numbers: start, max,
and end into a single 8-byte value. Each of these values is a summary of
a bitmap and is thus a count, each of which may have a maximum value of
2^21 - 1, or all three may be equal to 2^21. The latter case is represented
by just setting the 64th bit.
end extracts the end value from a packed sum.
max extracts the max value from a packed sum.
start extracts the start value from a packed sum.
unpack unpacks all three values from the summary.
func mergeSummaries(sums []pallocSum, logMaxPagesPerSum uint) pallocSum
func packPallocSum(start, max, end uint) pallocSum
func mergeSummaries(sums []pallocSum, logMaxPagesPerSum uint) pallocSum
const freeChunkSum
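The packing described above fits three 21-bit counts into one word. The sketch below assumes a concrete layout (start in the low bits, then max, then end, with bit 63 marking the fully-free case); the exact bit positions are an assumption consistent with the description, not necessarily the runtime's.
	package main

	import "fmt"

	const (
		logMaxPackedValue = 21
		maxPackedValue    = 1 << logMaxPackedValue // 2^21
	)

	type pallocSum uint64

	// packPallocSum packs start, max, and end into one word. If all three
	// equal 2^21 (a fully free chunk), only bit 63 is set.
	func packPallocSum(start, max, end uint) pallocSum {
		if max == maxPackedValue {
			return pallocSum(uint64(1) << 63)
		}
		return pallocSum(uint64(start&(maxPackedValue-1)) |
			uint64(max&(maxPackedValue-1))<<logMaxPackedValue |
			uint64(end&(maxPackedValue-1))<<(2*logMaxPackedValue))
	}

	// unpack recovers the three counts from a packed summary.
	func (p pallocSum) unpack() (start, max, end uint) {
		if uint64(p)&(1<<63) != 0 {
			return maxPackedValue, maxPackedValue, maxPackedValue
		}
		return uint(uint64(p) & (maxPackedValue - 1)),
			uint(uint64(p) >> logMaxPackedValue & (maxPackedValue - 1)),
			uint(uint64(p) >> (2 * logMaxPackedValue) & (maxPackedValue - 1))
	}

	func main() {
		s := packPallocSum(5, 100, 7)
		fmt.Println(s.unpack()) // 5 100 7
		full := packPallocSum(maxPackedValue, maxPackedValue, maxPackedValue)
		fmt.Println(full.unpack()) // 2097152 2097152 2097152
	}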
pcHeader holds data used by the pclntab lookups.
// offset to the cutab variable from pcHeader
// offset to the filetab variable from pcHeader
// offset to the funcnametab variable from pcHeader
// 0xFFFFFFF1
// min instruction size
// number of entries in the file tab
// number of functions in the module
// 0,0
// 0,0
// offset to the pclntab variable from pcHeader
// offset to the pctab variable from pcHeader
// size of a ptr in bytes
// base for function entry PC offsets in this module, equal to moduledata.text
entries [2][8]pcvalueCacheEnt
func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32
func newInlineUnwinder(f funcInfo, pc uintptr, cache *pcvalueCache) (inlineUnwinder, inlineFrame)
func pcdatavalue(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache) int32
func pcdatavalue1(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
func pcvalue(f funcInfo, off uint32, targetpc uintptr, cache *pcvalueCache, strict bool) (int32, uintptr)
off uint32
targetpc and off together are the key of this cache entry.
val is the value of this cached pcvalue entry.
perThreadSyscallArgs contains the system call number, arguments, and
expected return values for a system call to be executed on all threads.
a1 uintptr
a2 uintptr
a3 uintptr
a4 uintptr
a5 uintptr
a6 uintptr
r1 uintptr
r2 uintptr
trap uintptr
var perThreadSyscall
// Integral of the error from t=0 to now.
Error flags.
// Set if errIntegral ever overflowed.
// Set if an operation with the input overflowed.
// Proportional constant.
// Output boundaries.
// Output boundaries.
// Integral time constant.
// Reset time.
next provides a new sample to the controller.
input is the sample, setpoint is the desired point, and period is how much
time (in whatever unit makes the most sense) has passed since the last sample.
Returns a new value for the variable it's controlling, and whether the operation
completed successfully. One reason this might fail is if error has been growing
in an unbounded manner, to the point of overflow.
In the specific case where an error overflow occurs, the errOverflow field will be
set and the rest of the controller's internal state will be fully reset.
reset resets the controller state, except for controller error flags.
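Since the fields above describe a proportional-integral controller, here is a simplified PI controller sketch in the same spirit, with a clamped output; it is not the runtime's exact update rule, and the overflow handling described above is omitted.
	package main

	import "fmt"

	// piController is a simplified proportional-integral controller: kp is the
	// proportional constant, ti the integral time constant, and [min, max] the
	// output boundaries, mirroring the fields described above.
	type piController struct {
		kp, ti      float64
		min, max    float64
		errIntegral float64
	}

	// next feeds one sample to the controller: input is the measured value,
	// setpoint the desired value, and period the time since the last sample.
	func (c *piController) next(input, setpoint, period float64) float64 {
		err := setpoint - input
		if c.ti != 0 {
			c.errIntegral += c.kp * err * period / c.ti
		}
		out := c.kp*err + c.errIntegral
		if out < c.min {
			out = c.min
		} else if out > c.max {
			out = c.max
		}
		return out
	}

	func main() {
		c := piController{kp: 0.5, ti: 2, min: 0, max: 100}
		// Measured value 4, desired value 10, one time unit since last sample.
		fmt.Printf("%.2f\n", c.next(4, 10, 1))
		fmt.Printf("%.2f\n", c.next(6, 10, 1))
	}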
pinnerBits is the same type as gcBits but has different methods.
x uint8
ofObject returns the pinState of the n'th object.
nosplit, because it's called by isPinned, which is nosplit
byteVal uint8
bytep *uint8
mask uint8
(*pinState) isMultiPinned() bool
nosplit, because it's called by isPinned, which is nosplit
set sets the pin bit of the pinState to val. If multipin is true, it
sets/unsets the multipin bit instead.
(*pinState) setMultiPinned(val bool)
(*pinState) setPinned(val bool)
plainError represents a runtime error described by a string without
the prefix "runtime error: " after invoking errorString.Error().
See Issue #14965.
( plainError) Error() string
( plainError) RuntimeError()
plainError : Error
plainError : error
pMask is an atomic bitstring with one bit per P.
clear clears P id's bit.
read returns true if P id's bit is set.
set sets P id's bit.
func checkRunqsNoP(allpSnapshot []*p, idlepMaskSnapshot pMask) *p
func checkTimersNoP(allpSnapshot []*p, timerpMaskSnapshot pMask, pollUntil int64) int64
var idlepMask
var timerpMask
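A sketch of an atomic per-P bitmask in the spirit of pMask: one bit per P, read, set, and cleared with atomic operations on 32-bit words. The representation and CAS loops here are illustrative, not the runtime's code.
	package main

	import (
		"fmt"
		"sync/atomic"
	)

	// pMask is a toy atomic bitstring with one bit per P, stored in 32-bit words.
	type pMask []atomic.Uint32

	func newPMask(nprocs int) pMask {
		return make(pMask, (nprocs+31)/32)
	}

	// read reports whether P id's bit is set.
	func (p pMask) read(id uint32) bool {
		word, mask := id/32, uint32(1)<<(id%32)
		return p[word].Load()&mask != 0
	}

	// set sets P id's bit.
	func (p pMask) set(id uint32) {
		word, mask := id/32, uint32(1)<<(id%32)
		for {
			old := p[word].Load()
			if p[word].CompareAndSwap(old, old|mask) {
				return
			}
		}
	}

	// clear clears P id's bit.
	func (p pMask) clear(id uint32) {
		word, mask := id/32, uint32(1)<<(id%32)
		for {
			old := p[word].Load()
			if p[word].CompareAndSwap(old, old&^mask) {
				return
			}
		}
	}

	func main() {
		m := newPMask(40)
		m.set(35)
		fmt.Println(m.read(35), m.read(3)) // true false
		m.clear(35)
		fmt.Println(m.read(35)) // false
	}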
first *pollDesc
lock mutex
(*pollCache) alloc() *pollDesc
(*pollCache) free(pd *pollDesc)
var pollcache
Network poller descriptor.
No heap pointers.
atomicInfo holds bits from closing, rd, and wd,
which are only ever written while holding the lock,
summarized for use by netpollcheckerr,
which cannot acquire the lock.
After writing these fields under lock in a way that
might change the summary, code must call publishInfo
before releasing the lock.
Code that changes fields and then calls netpollunblock
(while still holding the lock) must call publishInfo
before calling netpollunblock, because publishInfo is what
stops netpollblock from blocking anew
(by changing the result of netpollcheckerr).
atomicInfo also holds the eventErr bit,
recording whether a poll event on the fd got an error;
atomicInfo is the only source of truth for that bit.
// atomic pollInfo
closing bool
// constant for pollDesc usage lifetime
// protects against stale pollDesc
// in pollcache, protected by pollcache.lock
// protects the following fields
// read deadline (a nanotime in the future, -1 when expired)
rg, wg are accessed atomically and hold g pointers.
(Using atomic.Uintptr here is similar to using guintptr elsewhere.)
// pdReady, pdWait, G waiting for read or pdNil
// protects from stale read timers
// read deadline timer (set if rt.f != nil)
// storage for indirect interface. See (*pollDesc).makeArg.
// user settable cookie
// write deadline (a nanotime in the future, -1 when expired)
// pdReady, pdWait, G waiting for write or pdNil
// protects from stale write timers
// write deadline timer
info returns the pollInfo corresponding to pd.
makeArg converts pd to an interface{}.
makeArg does not do any allocation. Normally, such
a conversion requires an allocation because pointers to
types which embed runtime/internal/sys.NotInHeap (which pollDesc is)
must be stored in interfaces indirectly. See issue 42076.
publishInfo updates pd.atomicInfo (returned by pd.info)
using the other values in pd.
It must be called while holding pd.lock,
and it must be called after changing anything
that might affect the info bits.
In practice this means after changing closing
or changing rd or wd from < 0 to >= 0.
setEventErr sets the result of pd.info().eventErr() to b.
We only change the error bit if seq == 0 or if seq matches pollFDSeq
(issue #59545).
func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)
func netpollarm(pd *pollDesc, mode int)
func netpollblock(pd *pollDesc, mode int32, waitio bool) bool
func netpollcheckerr(pd *pollDesc, mode int32) int
func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)
func netpollopen(fd uintptr, pd *pollDesc) uintptr
func netpollready(toRun *gList, pd *pollDesc, mode int32)
func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g
func poll_runtime_pollClose(pd *pollDesc)
func poll_runtime_pollReset(pd *pollDesc, mode int) int
func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int)
func poll_runtime_pollUnblock(pd *pollDesc)
func poll_runtime_pollWait(pd *pollDesc, mode int) int
func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int)
pollInfo is the bits needed by netpollcheckerr, stored atomically,
mostly duplicating state that is manipulated under lock in pollDesc.
The one exception is the pollEventErr bit, which is maintained only
in the pollInfo.
( pollInfo) closing() bool
( pollInfo) eventErr() bool
( pollInfo) expiredReadDeadline() bool
( pollInfo) expiredWriteDeadline() bool
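The publishInfo pattern described above (fields written under a lock, a summary word read without it) can be sketched roughly as follows; the bit assignments and names are illustrative, not the runtime's actual layout.
	package main

	import (
		"fmt"
		"sync"
		"sync/atomic"
	)

	// Illustrative summary bits; the runtime's real bit layout may differ.
	const (
		infoClosing = 1 << iota
		infoExpiredReadDeadline
		infoExpiredWriteDeadline
	)

	// desc mimics a pollDesc: rd, wd, and closing are protected by lock, while
	// atomicInfo is a lock-free summary that checkers can read at any time.
	type desc struct {
		lock       sync.Mutex
		closing    bool
		rd, wd     int64 // deadlines; -1 means expired
		atomicInfo atomic.Uint32
	}

	// publishInfo recomputes the summary from the locked fields. It must be
	// called with lock held after any change that could affect the summary.
	func (d *desc) publishInfo() {
		var info uint32
		if d.closing {
			info |= infoClosing
		}
		if d.rd < 0 {
			info |= infoExpiredReadDeadline
		}
		if d.wd < 0 {
			info |= infoExpiredWriteDeadline
		}
		d.atomicInfo.Store(info)
	}

	// checkErr is the lock-free reader, loosely analogous to netpollcheckerr.
	func (d *desc) checkErr() string {
		info := d.atomicInfo.Load()
		switch {
		case info&infoClosing != 0:
			return "closing"
		case info&infoExpiredReadDeadline != 0:
			return "read deadline expired"
		default:
			return "ok"
		}
	}

	func main() {
		d := &desc{rd: 100, wd: 100}
		d.lock.Lock()
		d.rd = -1 // read deadline expired
		d.publishInfo()
		d.lock.Unlock()
		fmt.Println(d.checkErr())
	}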
A profAtomic is the atomically-accessed word holding a profIndex.
(*profAtomic) cas(old, new profIndex) bool
(*profAtomic) load() profIndex
(*profAtomic) store(new profIndex)
A profBuf is a lock-free buffer for profiling events,
safe for concurrent use by one reader and one writer.
The writer may be a signal handler running without a user g.
The reader is assumed to be a user g.
Each logged event corresponds to a fixed size header, a list of
uintptrs (typically a stack), and exactly one unsafe.Pointer tag.
The header and uintptrs are stored in the circular buffer data and the
tag is stored in a circular buffer tags, running in parallel.
In the circular buffer data, each event takes 2+hdrsize+len(stk)
words: the value 2+hdrsize+len(stk), then the time of the event, then
hdrsize words giving the fixed-size header, and then len(stk) words
for the stack.
The current effective offsets into the tags and data circular buffers
for reading and writing are stored in the high 30 and low 32 bits of r and w.
The bottom bits of the high 32 are additional flag bits in w, unused in r.
"Effective" offsets means the total number of reads or writes, mod 2^length.
The offset in the buffer is the effective offset mod the length of the buffer.
To make wraparound mod 2^length match wraparound mod length of the buffer,
the length of the buffer must be a power of two.
If the reader catches up to the writer, a flag passed to read controls
whether the read blocks until more data is available. A read returns a
pointer to the buffer data itself; the caller is assumed to be done with
that data at the next read. The read offset rNext tracks the next offset to
be returned by read. By definition, r ≤ rNext ≤ w (before wraparound),
and rNext is only used by the reader, so it can be accessed without atomics.
If the writer gets ahead of the reader, so that the buffer fills,
future writes are discarded and replaced in the output stream by an
overflow entry, which has size 2+hdrsize+1, time set to the time of
the first discarded write, a header of all zeroed words, and a "stack"
containing one word, the number of discarded writes.
Between the time the buffer fills and the buffer becomes empty enough
to hold more data, the overflow entry is stored as a pending overflow
entry in the fields overflow and overflowTime. The pending overflow
entry can be turned into a real record by either the writer or the
reader. If the writer is called to write a new record and finds that
the output buffer has room for both the pending overflow entry and the
new record, the writer emits the pending overflow entry and the new
record into the buffer. If the reader is called to read data and finds
that the output buffer is empty but that there is a pending overflow
entry, the reader will return a synthesized record for the pending
overflow entry.
Only the writer can create or add to a pending overflow entry, but
either the reader or the writer can clear the pending overflow entry.
A pending overflow entry is indicated by the low 32 bits of 'overflow'
holding the number of discarded writes, and overflowTime holding the
time of the first discarded write. The high 32 bits of 'overflow'
increment each time the low 32 bits transition from zero to non-zero
or vice versa. This sequence number avoids ABA problems in the use of
compare-and-swap to coordinate between reader and writer.
The overflowTime is only written when the low 32 bits of overflow are
zero, that is, only when there is no pending overflow entry, in
preparation for creating a new one. The reader can therefore fetch and
clear the entry atomically using
	for {
		overflow = load(&b.overflow)
		if uint32(overflow) == 0 {
			// no pending entry
			break
		}
		time = load(&b.overflowTime)
		if cas(&b.overflow, overflow, ((overflow>>32)+1)<<32) {
			// pending entry cleared
			break
		}
	}
	if uint32(overflow) > 0 {
		emit entry for uint32(overflow), time
	}
data []uint64
eof atomic.Uint32
immutable (excluding slice content)
overflow atomic.Uint64
// for use by reader to return overflow record
overflowTime atomic.Uint64
accessed atomically
owned by reader
tags []unsafe.Pointer
accessed atomically
wait note
canWriteRecord reports whether the buffer has room
for a single contiguous record with a stack of length nstk.
canWriteTwoRecords reports whether the buffer has room
for two records with stack lengths nstk1, nstk2, in that order.
Each record must be contiguous on its own, but the two
records need not be contiguous (one can be at the end of the buffer
and the other can wrap around and start at the beginning of the buffer).
close signals that there will be no more writes on the buffer.
Once all the data has been read from the buffer, reads will return eof=true.
hasOverflow reports whether b has any overflow records pending.
incrementOverflow records a single overflow at time now.
It is racing against a possible takeOverflow in the reader.
(*profBuf) read(mode profBufReadMode) (data []uint64, tags []unsafe.Pointer, eof bool)
takeOverflow consumes the pending overflow records, returning the overflow count
and the time of the first overflow.
When called by the reader, it is racing against incrementOverflow.
wakeupExtra must be called after setting one of the "extra"
atomic fields b.overflow or b.eof.
It records the change in b.w and wakes up the reader if needed.
write writes an entry to the profiling buffer b.
The entry begins with a fixed hdr, which must have
length b.hdrsize, followed by a variable-sized stack
and a single tag pointer *tagPtr (or nil if tagPtr is nil).
No write barriers allowed because this might be called from a signal handler.
func newProfBuf(hdrsize, bufwords, tags int) *profBuf
profBufReadMode specifies whether to block when no data is available to read.
const profBufBlocking
const profBufNonBlocking
A profIndex is the packet tag and data counts and flags bits, described above.
addCountsAndClearFlags returns the packed form of "x + (data, tag) - all flags".
( profIndex) dataCount() uint32
( profIndex) tagCount() uint32
const profReaderSleeping
const profWriteExtra
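Given the layout described in the profBuf comment (a 32-bit data count in the low half, a 30-bit tag count at the top of the high half, and flag bits in between), a sketch of packing and unpacking such an index might look like this; the exact shifts are an assumption consistent with that description.
	package main

	import "fmt"

	// profIndex packs a data count (low 32 bits), two flag bits (bits 32-33),
	// and a tag count (bits 34-63), as the profBuf comment describes.
	type profIndex uint64

	const (
		profReaderSleeping profIndex = 1 << 32
		profWriteExtra     profIndex = 1 << 33
	)

	func (x profIndex) dataCount() uint32 { return uint32(x) }
	func (x profIndex) tagCount() uint32  { return uint32(x >> 34) }

	// addCountsAndClearFlags returns x advanced by data and tag with all flag
	// bits cleared, mirroring the method named in the listing above.
	func (x profIndex) addCountsAndClearFlags(data, tag int) profIndex {
		return profIndex((uint64(x)>>34+uint64(tag))<<34 |
			uint64(uint32(x)+uint32(data)))
	}

	func main() {
		var x profIndex
		x = x.addCountsAndClearFlags(5, 1) | profWriteExtra
		fmt.Println(x.dataCount(), x.tagCount()) // 5 1
		fmt.Println(x&profWriteExtra != 0)       // true
		x = x.addCountsAndClearFlags(3, 0)
		fmt.Println(x.dataCount(), x.tagCount()) // 8 1
		fmt.Println(x&profWriteExtra != 0)       // false (flags cleared)
	}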
A ptabEntry is generated by the compiler for each exported function
and global variable in the main package of a plugin. It is used to
initialize the plugin module's symbol map.
name nameOff
typ typeOff
pTraceState is per-P state for the tracer.
buf traceBufPtr
inSweep indicates the sweep events should be traced.
This is used to defer the sweep start event until a span
has actually been swept.
swept and reclaimed track the number of bytes swept and reclaimed
by sweeping in the current sweep loop (while inSweep was true).
count uint32
i uint32
inc uint32
pos uint32
(*randomEnum) done() bool
(*randomEnum) next()
(*randomEnum) position() uint32
randomOrder/randomEnum are helper types for randomized work stealing.
They allow enumerating all Ps in different pseudo-random orders without repetitions.
The algorithm is based on the fact that if we have X such that X and GOMAXPROCS
are coprime, then the sequence (i + X) % GOMAXPROCS gives the required enumeration.
coprimes []uint32
count uint32
(*randomOrder) reset(count uint32)
(*randomOrder) start(i uint32) randomEnum
var stealOrder
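A sketch of the coprime-stride enumeration described above: pick a stride coprime with the number of Ps and step (i + X) % count until every index has been visited once. The names mirror the listing, but this is illustrative code, not the scheduler's.
	package main

	import "fmt"

	func gcd(a, b uint32) uint32 {
		for b != 0 {
			a, b = b, a%b
		}
		return a
	}

	// randomOrder precomputes all strides coprime with count; stepping by any
	// of them visits every index exactly once before repeating.
	type randomOrder struct {
		count    uint32
		coprimes []uint32
	}

	func (ord *randomOrder) reset(count uint32) {
		ord.count = count
		ord.coprimes = ord.coprimes[:0]
		for i := uint32(1); i <= count; i++ {
			if gcd(i, count) == 1 {
				ord.coprimes = append(ord.coprimes, i)
			}
		}
	}

	// randomEnum walks count positions starting at pos, advancing by inc each step.
	type randomEnum struct {
		i, count, pos, inc uint32
	}

	func (ord *randomOrder) start(i uint32) randomEnum {
		return randomEnum{
			count: ord.count,
			pos:   i % ord.count,
			inc:   ord.coprimes[i/ord.count%uint32(len(ord.coprimes))],
		}
	}

	func (e *randomEnum) done() bool         { return e.i == e.count }
	func (e *randomEnum) next()              { e.i++; e.pos = (e.pos + e.inc) % e.count }
	func (e *randomEnum) position() uint32   { return e.pos }

	func main() {
		var ord randomOrder
		ord.reset(6) // pretend GOMAXPROCS == 6
		for enum := ord.start(7); !enum.done(); enum.next() {
			fmt.Print(enum.position(), " ") // each of 0..5 exactly once
		}
		fmt.Println()
	}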
reflectMethodValue is a partial duplicate of reflect.makeFuncImpl
and reflect.methodValue.
// just args
fn uintptr
// ptrmap for both args and results
rtype is a wrapper that allows us to define additional methods.
// embedding is okay here (unlike reflect) because none of this is public
// alignment of variable with this type
function for comparing objects of this type
(ptr to object A, ptr to object B) -> ==?
// alignment of struct field with this type
GCData stores the GC type data for the garbage collector.
If the KindGCProg bit is set in kind, GCData is a GC program.
Otherwise it is a ptrmask bitmap. See mbitmap.go for details.
// hash of type; avoids computation in hash tables
// enumeration for C
// number of (prefix) bytes in the type that can contain pointers
// type for pointer to this type, may be zero
Type.Size_ uintptr
// string form
// extra type information flags
Align returns the alignment of data with type t.
ArrayType returns t cast to a *ArrayType, or nil if its tag does not match.
ChanDir returns the direction of t if t is a channel type, otherwise InvalidDir (0).
( rtype) Common() *abi.Type
Elem returns the element type for t if t is an array, channel, map, pointer, or slice, otherwise nil.
( rtype) ExportedMethods() []abi.Method
( rtype) FieldAlign() int
FuncType returns t cast to a *FuncType, or nil if its tag does not match.
( rtype) GcSlice(begin, end uintptr) []byte
( rtype) HasName() bool
IfaceIndir reports whether t is stored indirectly in an interface value.
InterfaceType returns t cast to a *InterfaceType, or nil if its tag does not match.
isDirectIface reports whether t is stored directly in an interface value.
( rtype) Key() *abi.Type
( rtype) Kind() abi.Kind
Len returns the length of t if t is an array type, otherwise 0
MapType returns t cast to a *MapType, or nil if its tag does not match.
( rtype) NumMethod() int
( rtype) Pointers() bool
Size returns the size of data with type t.
StructType returns t cast to a *StructType, or nil if its tag does not match.
Uncommon returns a pointer to T's "uncommon" data if there is any, otherwise nil
( rtype) name() string
( rtype) nameOff(off nameOff) name
pkgpath returns the path of the package where t was defined, if
available. This is not the same as the reflect package's PkgPath
method, in that it returns the package path for struct and interface
types, not just named types.
( rtype) string() string
( rtype) textOff(off textOff) unsafe.Pointer
( rtype) typeOff(off typeOff) *_type
( rtype) uncommon() *uncommontype
func toRType(t *abi.Type) rtype
A runtimeSelect is a single case passed to rselect.
This must match ../reflect/value.go:/runtimeSelect
// channel
dir selectDir
// channel type (not used here)
// ptr to data (SendDir) or ptr to receive buffer (RecvDir)
func reflect_rselect(cases []runtimeSelect) (int, bool)
A rwmutex is a reader/writer mutual exclusion lock.
The lock can be held by an arbitrary number of readers or a single writer.
This is a variant of sync.RWMutex, for the runtime package.
Like mutex, rwmutex blocks the calling M.
It does not interact with the goroutine scheduler.
// protects readers, readerPass, writer
// number of pending readers
// number of pending readers to skip readers list
// number of departing readers
// list of pending readers
// serializes writers
// pending writer waiting for completing readers
lock locks rw for writing.
rlock locks rw for reading.
runlock undoes a single rlock call on rw.
unlock unlocks rw for writing.
var allocmLock
var execLock
Select case descriptor.
Known to compiler.
Changes here must also be made in src/cmd/compile/internal/walk/select.go's scasetype.
// chan
// data element
func selectgo(cas0 *scase, order0 *uint16, pc0 *uintptr, nsends, nrecvs int, block bool) (int, bool)
func sellock(scases []scase, lockorder []uint16)
func selunlock(scases []scase, lockorder []uint16)
scavChunkData tracks information about a palloc chunk for
scavenging. It packs well into 64 bits.
The zero value always represents a valid newly-grown chunk.
gen is the generation counter from a scavengeIndex from the
last time this scavChunkData was updated.
inUse indicates how many pages in this chunk are currently
allocated.
Only the first 10 bits are used.
lastInUse indicates how many pages in this chunk were allocated
when we transitioned from gen-1 to gen.
Only the first 10 bits are used.
scavChunkFlags represents additional flags
Note: only 6 bits are available.
alloc updates sc given that npages were allocated in the corresponding chunk.
free updates sc given that npages was freed in the corresponding chunk.
isEmpty returns true if the hasFree flag is unset.
isHugePage returns false if the noHugePage flag is set.
pack returns sc packed into a uint64.
setEmpty clears the hasFree flag.
setHugePage clears the noHugePage flag.
setNoHugePage sets the noHugePage flag.
setNonEmpty sets the hasFree flag.
shouldScavenge returns true if the corresponding chunk should be interrogated
by the scavenger.
func unpackScavChunkData(sc uint64) scavChunkData
scavChunkFlags is a set of bit-flags for the scavenger for each palloc chunk.
isEmpty returns true if the hasFree flag is unset.
isHugePage returns false if the noHugePage flag is set.
setEmpty clears the hasFree flag.
setHugePage clears the noHugePage flag.
setNoHugePage sets the noHugePage flag.
setNonEmpty sets the hasFree flag.
const scavChunkHasFree
const scavChunkNoHugePage
scavengeIndex is a structure for efficiently managing which pageAlloc chunks have
memory available to scavenge.
chunks is a scavChunkData-per-chunk structure that indicates the presence of pages
available for scavenging. Updates to the index are serialized by the pageAlloc lock.
It tracks chunk occupancy and a generation counter per chunk. If a chunk's occupancy
never exceeds pallocChunkDensePages over the course of a single GC cycle, the chunk
becomes eligible for scavenging on the next cycle. If a chunk ever hits this density
threshold it immediately becomes unavailable for scavenging in the current cycle as
well as the next.
[min, max) represents the range of chunks that is safe to access (i.e. will not cause
a fault). As an optimization minHeapIdx represents the true minimum chunk that has been
mapped, since min is likely rounded down to include the system page containing minHeapIdx.
For a chunk size of 4 MiB this structure will only use 2 MiB for a 1 TiB contiguous heap.
freeHWM is the highest address (in offset address space) that was freed
this generation.
Generation counter. Updated by nextGen at the end of each mark phase.
max atomic.Uintptr
min atomic.Uintptr
minHeapIdx atomic.Uintptr
searchAddr* is the maximum address (in the offset address space, so we have a linear
view of the address space; see mranges.go:offAddr) containing memory available to
scavenge. It is a hint to the find operation to avoid O(n^2) behavior in repeated lookups.
searchAddr* is always inclusive and should be the base address of the highest runtime
page available for scavenging.
searchAddrForce is managed by find and free.
searchAddrBg is managed by find and nextGen.
Normally, find monotonically decreases searchAddr* as it finds no more free pages to
scavenge. However, mark, when marking a new chunk at an index greater than the current
searchAddr, sets searchAddr to the *negative* index into chunks of that page. The trick here
is that concurrent calls to find will fail to monotonically decrease searchAddr*, and so they
won't barge over new memory becoming available to scavenge. Furthermore, this ensures
that some future caller of find *must* observe the new high index. That caller
(or any other racing with it), then makes searchAddr positive before continuing, bringing
us back to our monotonically decreasing steady-state.
A pageAlloc lock serializes updates between min, max, and searchAddr, so abs(searchAddr)
is always guaranteed to be >= min and < max (converted to heap addresses).
searchAddrBg is increased only on each new generation and is mainly used by the
background scavenger and heap-growth scavenging. searchAddrForce is increased continuously
as memory gets freed and is mainly used by eager memory reclaim such as debug.FreeOSMemory
and scavenging to maintain the memory limit.
searchAddrForce atomicOffAddr
test indicates whether or not we're in a test.
alloc updates metadata for chunk at index ci with the fact that
an allocation of npages occurred. It also eagerly attempts to collapse
the chunk's memory into hugepage if the chunk has become sufficiently
dense and we're not allocating the whole chunk at once (which suggests
the allocation is part of a bigger one and it's probably not worth
eagerly collapsing).
alloc may only run concurrently with find.
find returns the highest chunk index that may contain pages available to scavenge.
It also returns an offset to start searching in the highest chunk.
free updates metadata for chunk at index ci with the fact that
a free of npages occurred.
free may only run concurrently with find.
sysGrow updates the index's backing store in response to a heap growth.
Returns the amount of memory added to sysStat.
init initializes the scavengeIndex.
Returns the amount added to sysStat.
nextGen moves the scavenger forward one generation. Must be called
once per GC cycle, but may be called more often to force more memory
to be released.
nextGen may only run concurrently with find.
setEmpty marks that the scavenger has finished looking at ci
for now to prevent the scavenger from getting stuck looking
at the same chunk.
setEmpty may only run concurrently with find.
setNoHugePage updates the backed-by-hugepages status of a particular chunk.
Returns true if the set was successful (not already backed by huge pages).
setNoHugePage may only run concurrently with find.
sysGrow increases the index's backing store in response to a heap growth.
Returns the amount of memory added to sysStat.
sysInit initializes the scavengeIndex's chunks array.
Returns the amount of memory added to sysStat.
cooldown is the time left in nanoseconds during which we avoid
using the controller and we hold sleepRatio at a conservative
value. Used if the controller's assumptions fail to hold.
g is the goroutine the scavenger is bound to.
gomaxprocs returns the current value of gomaxprocs. Stub for testing.
If this is nil, it is populated with the real thing in init.
lock protects all fields below.
parked is whether or not the scavenger is parked.
printControllerReset instructs printScavTrace to signal that
the controller was reset.
scavenge is a function that scavenges n bytes of memory.
Returns how many bytes of memory it actually scavenged, as
well as the time it took in nanoseconds. Usually mheap.pages.scavenge
with nanotime called around it, but stubbed out for testing.
Like mheap.pages.scavenge, if it scavenges less than n bytes of
memory, the caller may assume the heap is exhausted of scavengable
memory for now.
If this is nil, it is populated with the real thing in init.
shouldStop is a callback called in the work loop and provides a
point that can force the scavenger to stop early, for example because
the scavenge policy dictates too much has been scavenged already.
If this is nil, it is populated with the real thing in init.
sleepController controls sleepRatio.
See sleepRatio for more details.
sleepRatio is the ratio of time spent doing scavenging work to
time spent sleeping. This is used to decide how long the scavenger
should sleep for in between batches of work. It is set by
critSleepController in order to maintain a CPU overhead of
targetCPUFraction.
Lower means more sleep, higher means more aggressive scavenging.
sleepStub is a stub used for testing to avoid actually having
the scavenger sleep.
Unlike the other stubs, this is not populated if left nil.
Instead, it is called when non-nil because any valid implementation
of this function basically requires closing over this scavenger
state, and allocating a closure is not allowed in the runtime as
a matter of policy.
sysmonWake signals to sysmon that it should wake the scavenger.
targetCPUFraction is the target CPU overhead for the scavenger.
timer is the timer used for the scavenger to sleep.
controllerFailed indicates that the scavenger's scheduling
controller failed.
init initializes a scavenger state and wires to the current G.
Must be called from a regular goroutine that can allocate.
park parks the scavenger goroutine.
ready signals to sysmon that the scavenger should be awoken.
run is the body of the main scavenging loop.
Returns the number of bytes released and the estimated time spent
releasing those bytes.
Must be run on the scavenger goroutine.
sleep puts the scavenger to sleep based on the amount of time that it worked
in nanoseconds.
Note that this function should only be called by the scavenger.
The scavenger may be woken up earlier by a pacing change, and it may not go
to sleep at all if there's a pending pacing change.
wake immediately unparks the scavenger if necessary.
Safe to run without a P.
var scavenger
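The sleepRatio/targetCPUFraction relationship described above can be shown with a small worked sketch. This is not the runtime's code and the names are illustrative: a work/sleep ratio of f/(1-f) keeps a worker's CPU share near f, so a 1% target means roughly 99 units of sleep per unit of work.

	package main

	import (
		"fmt"
		"time"
	)

	// sleepFor returns how long to sleep after `worked` so that
	// work/(work+sleep) ≈ targetCPUFraction, i.e. sleepRatio = f/(1-f).
	func sleepFor(worked time.Duration, targetCPUFraction float64) time.Duration {
		sleepRatio := targetCPUFraction / (1 - targetCPUFraction)
		return time.Duration(float64(worked) / sleepRatio)
	}

	func main() {
		// With a 1% target, 2ms of work implies roughly 198ms of sleep.
		fmt.Println(sleepFor(2*time.Millisecond, 0.01))
	}
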
Central pool of available defer structs.
deferpool *_defer
disable controls selective disabling of the scheduler.
Use schedEnableUser to control this.
disable is protected by sched.lock.
freem is the list of m's waiting to be freed when their
m.exited is set. Linked through m.freelink.
Global cache of dead G's.
// gc is waiting to run
goidgen atomic.Uint64
idleTime is the total CPU time Ps have "spent" idle.
Reset on each GC cycle.
// time of last network poll, 0 if currently polling
lock mutex
// maximum number of m's allowed (or die)
// idle m's waiting for work
// number of m's that have been created and next M ID
// See "Delicate dance" comment in proc.go. Boolean. Must hold sched.lock to set to 1.
// number of system goroutines
// cumulative number of freed m's
// number of idle m's waiting for work
// number of locked m's waiting for work
// See "Worker thread parking/unparking" comment in proc.go.
// number of system m's not counted for deadlock
npidle atomic.Int32
// idle p's
// time to which current poll is sleeping
// nanotime() of last change to gomaxprocs
// cpu profiling rate
Global runnable queue.
runqsize int32
safepointFn should be called on each P at the next GC
safepoint if p.runSafePointFn is set.
safePointNote note
safePointWait int32
stopnote note
stopwait int32
sudogcache *sudog
Central cache of sudog structs.
sysmonlock protects sysmon's actions on the runtime.
Acquire and hold this mutex to block sysmon from interacting
with the rest of the runtime.
sysmonnote note
sysmonwait atomic.Bool
timeToRun is a distribution of scheduling latencies, defined
as the sum of time a G spends in the _Grunnable state before
it transitions to _Grunning.
totalMutexWaitTime is the sum of time goroutines have spent in _Gwaiting
with a waitreason of the form waitReasonSync{RW,}Mutex{R,}Lock.
// ∫gomaxprocs dt up to procresizetime
var sched
These values must match ../reflect/value.go:/SelectDir.
const selectDefault
const selectRecv
const selectSend
func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int, reason waitReason)
const semaBlockProfile
const semaMutexProfile
A semaRoot holds a balanced tree of sudog with distinct addresses (s.elem).
Each of those sudog may in turn point (through s.waitlink) to a list
of other sudogs waiting on the same address.
The operations on the inner lists of sudogs with the same address
are all O(1). The scanning of the top-level semaRoot list is O(log n),
where n is the number of distinct addresses with goroutines blocked
on them that hash to the given semaRoot.
See golang.org/issue/17953 for a program that worked badly
before we introduced the second level of list, and
BenchmarkSemTable/OneAddrCollision/* for a benchmark that exercises this.
lock mutex
// Number of waiters. Read w/o the lock.
// root of balanced tree of unique waiters.
dequeue searches for and finds the first goroutine
in semaRoot blocked on addr.
If the sudog was being profiled, dequeue returns the time
at which it was woken up as now. Otherwise now is 0.
queue adds s to the blocked goroutines in semaRoot.
rotateLeft rotates the tree rooted at node x.
turning (x a (y b c)) into (y (x a b) c).
rotateRight rotates the tree rooted at node y.
turning (y (x a b) c) into (x a (y b c)).
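A much-simplified sketch (not runtime code) of the two-level idea behind semaRoot: waiters are grouped by address, so queue and dequeue on a given address's own list are O(1). The runtime keys a balanced tree on the address because it cannot allocate; a map is used here only to keep the illustration short, and all names are hypothetical.

	package main

	import "fmt"

	type waiter struct {
		id   int
		next *waiter // next waiter blocked on the same address (LIFO in this sketch)
	}

	type semTable struct {
		byAddr map[uintptr]*waiter // second level: one list per distinct address
	}

	func (t *semTable) queue(addr uintptr, w *waiter) {
		// O(1) once the address entry is found: push onto that address's list.
		w.next = t.byAddr[addr]
		t.byAddr[addr] = w
	}

	func (t *semTable) dequeue(addr uintptr) *waiter {
		w := t.byAddr[addr]
		if w == nil {
			return nil
		}
		t.byAddr[addr] = w.next
		w.next = nil
		return w
	}

	func main() {
		t := &semTable{byAddr: make(map[uintptr]*waiter)}
		t.queue(0x1000, &waiter{id: 1})
		t.queue(0x1000, &waiter{id: 2})
		fmt.Println(t.dequeue(0x1000).id, t.dequeue(0x1000).id) // 2 1
	}
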
sa_flags uint64
sa_handler uintptr
sa_mask uint64
sa_restorer uintptr
func callCgoSigaction(sig uintptr, new, old *sigactiont) int32
func rt_sigaction(sig uintptr, new, old *sigactiont, size uintptr) int32
func sigaction(sig uint32, new, old *sigactiont)
func sysSigaction(sig uint32, new, old *sigactiont)
__pad0 uint16
__reserved1 [8]uint64
cr2 uint64
cs uint16
eflags uint64
err uint64
fpstate *fpstate1
fs uint16
gs uint16
oldmask uint64
r10 uint64
r11 uint64
r12 uint64
r13 uint64
r14 uint64
r15 uint64
r8 uint64
r9 uint64
rax uint64
rbp uint64
rbx uint64
rcx uint64
rdi uint64
rdx uint64
rip uint64
rsi uint64
rsp uint64
trapno uint64
ctxt unsafe.Pointer
info *siginfo
(*sigctxt) cs() uint64
(*sigctxt) fault() uintptr
(*sigctxt) fixsigcode(sig uint32)
(*sigctxt) fs() uint64
(*sigctxt) gs() uint64
preparePanic sets up the stack to look like a call to sigpanic.
(*sigctxt) pushCall(targetPC, resumePC uintptr)
(*sigctxt) r10() uint64
(*sigctxt) r11() uint64
(*sigctxt) r12() uint64
(*sigctxt) r13() uint64
(*sigctxt) r14() uint64
(*sigctxt) r15() uint64
(*sigctxt) r8() uint64
(*sigctxt) r9() uint64
(*sigctxt) rax() uint64
(*sigctxt) rbp() uint64
(*sigctxt) rbx() uint64
(*sigctxt) rcx() uint64
(*sigctxt) rdi() uint64
(*sigctxt) rdx() uint64
(*sigctxt) regs() *sigcontext
(*sigctxt) rflags() uint64
(*sigctxt) rip() uint64
(*sigctxt) rsi() uint64
(*sigctxt) rsp() uint64
(*sigctxt) set_rip(x uint64)
(*sigctxt) set_rsp(x uint64)
(*sigctxt) set_sigaddr(x uint64)
(*sigctxt) set_sigcode(x uint64)
(*sigctxt) setsigpc(x uint64)
sigFromUser reports whether the signal was sent because of a call
to kill or tgkill.
(*sigctxt) sigaddr() uint64
(*sigctxt) sigcode() uint64
(*sigctxt) siglr() uintptr
(*sigctxt) sigpc() uintptr
(*sigctxt) sigsp() uintptr
func badsignal(sig uintptr, c *sigctxt)
func doSigPreempt(gp *g, ctxt *sigctxt)
func dumpregs(c *sigctxt)
func fatalsignal(sig uint32, c *sigctxt, gp *g, mp *m) *g
func raisebadsignal(sig uint32, c *sigctxt)
func sigFetchG(c *sigctxt) *g
func validSIGPROF(mp *m, c *sigctxt) bool
sigeventFields sigeventFields
sigeventFields.notify int32
below here is a union; sigev_notify_thread_id is the only field we use
sigeventFields.signo int32
sigeventFields.value uintptr
func timer_create(clockid int32, sevp *sigevent, timerid *int32) int32
notify int32
below here is a union; sigev_notify_thread_id is the only field we use
signo int32
value uintptr
siginfoFields siginfoFields
below here is a union; si_addr is the only field we use
siginfoFields.si_code int32
siginfoFields.si_errno int32
siginfoFields.si_signo int32
func sigfwd(fn uintptr, sig uint32, info *siginfo, ctx unsafe.Pointer)
func sigfwdgo(sig uint32, info *siginfo, ctx unsafe.Pointer) bool
func sighandler(sig uint32, info *siginfo, ctxt unsafe.Pointer, gp *g)
func sigprofNonGo(sig uint32, info *siginfo, ctx unsafe.Pointer)
func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer)
It's hard to tease out exactly how big a Sigset is, but
rt_sigprocmask crashes if we get it wrong, so if binaries
are running, this is right.
func msigrestore(sigmask sigset)
func rtsigprocmask(how int32, new, old *sigset, size int32)
func sigaddset(mask *sigset, i int)
func sigdelset(mask *sigset, i int)
func sigprocmask(how int32, new, old *sigset)
func sigsave(p *sigset)
var initSigmask
var sigset_all
var sigsetAllExiting
sigTabT is the type of an entry in the global sigtable array.
sigtable is inherently system dependent, and appears in OS-specific files,
but sigTabT is the same for all Unixy systems.
The sigtable array is indexed by a system signal number to get the flags
and printable name of each signal.
flags int32
name string
array unsafe.Pointer
cap int
len int
func growslice(oldPtr unsafe.Pointer, newLen, oldCap, num int, et *_type) slice
func reflect_growslice(et *_type, old slice, num int) slice
func copyKeys(t *maptype, h *hmap, b *bmap, s *slice, offset uint8)
func copyValues(t *maptype, h *hmap, b *bmap, s *slice, offset uint8)
func reflect_growslice(et *_type, old slice, num int) slice
func reflect_typedslicecopy(elemType *_type, dst, src slice) int
The specialized convTx routines need a type descriptor to use when calling mallocgc.
We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
so we use named types here.
We then construct interface values of these types,
and then extract the type word to use as needed.
spanAllocType represents the type of allocation to make, or
the type of allocation to be freed.
manual returns true if the span allocation is manually managed.
const spanAllocHeap
const spanAllocPtrScalarBits
const spanAllocStack
const spanAllocWorkBuf
A spanClass represents the size class and noscan-ness of a span.
Each size class has a noscan spanClass and a scan spanClass. The
noscan spanClass contains only noscan objects, which do not contain
pointers and thus do not need to be scanned by the garbage
collector.
( spanClass) noscan() bool
( spanClass) sizeclass() int8
func makeSpanClass(sizeclass uint8, noscan bool) spanClass
const tinySpanClass
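The listed spanClass methods imply a compact encoding: the size class in the upper bits and a noscan bit in the lowest bit. The sketch below is illustrative, written only to match the signatures above rather than copied from the runtime.

	package main

	import "fmt"

	type spanClass uint8

	// makeSpanClass packs a size class and a noscan bit into one byte.
	func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
		sc := spanClass(sizeclass << 1)
		if noscan {
			sc |= 1
		}
		return sc
	}

	func (sc spanClass) sizeclass() int8 { return int8(sc >> 1) }
	func (sc spanClass) noscan() bool    { return sc&1 != 0 }

	func main() {
		sc := makeSpanClass(5, true)
		fmt.Println(sc.sizeclass(), sc.noscan()) // 5 true
	}
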
A spanSet is a set of *mspans.
spanSet is safe for concurrent push and pop operations.
index is the head and tail of the spanSet in a single field.
The head and the tail both represent an index into the logical
concatenation of all blocks, with the head always behind or
equal to the tail (indicating an empty set). This field is
always accessed atomically.
The head and the tail are only 32 bits wide, which means we
can only support up to 2^32 pushes before a reset. If every
span in the heap were stored in this set, and each span were
the minimum size (1 runtime page, 8 KiB), then roughly the
smallest heap which would be unrepresentable is 32 TiB in size.
// *[N]atomic.Pointer[spanSetBlock]
// Spine array cap, accessed under spineLock
// Spine array length
spineLock mutex
pop removes and returns a span from buffer b, or nil if b is empty.
pop is safe to call concurrently with other pop and push operations.
push adds span s to buffer b. push is safe to call concurrently
with other push and pop operations.
reset resets a spanSet which is empty. It will also clean up
any left over blocks.
Throws if the buf is not empty.
reset may not be called concurrently with any other operations
on the span set.
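A sketch (not runtime code) of the head/tail packing described for spanSet.index: both 32-bit cursors live in one uint64 so they can be read and updated with single atomic operations, and 2^32 pushes of 8 KiB spans corresponds to roughly 32 TiB of heap before a reset is required. Names here are illustrative.

	package main

	import (
		"fmt"
		"sync/atomic"
	)

	type headTail uint64

	func makeHeadTail(head, tail uint32) headTail { return headTail(uint64(head)<<32 | uint64(tail)) }
	func (h headTail) split() (head, tail uint32) { return uint32(h >> 32), uint32(h) }

	type index struct{ v atomic.Uint64 }

	// push advances the tail by one, claiming the next slot.
	// The tail lives in the low 32 bits, so a plain atomic add works.
	func (i *index) push() (slot uint32) {
		return uint32(i.v.Add(1)) - 1
	}

	// pop atomically advances the head if the set is non-empty.
	func (i *index) pop() (slot uint32, ok bool) {
		for {
			old := headTail(i.v.Load())
			head, tail := old.split()
			if head >= tail {
				return 0, false // empty: head has caught up with tail
			}
			upd := makeHeadTail(head+1, tail)
			if i.v.CompareAndSwap(uint64(old), uint64(upd)) {
				return head, true
			}
		}
	}

	func main() {
		var i index
		i.push()
		i.push()
		fmt.Println(i.pop()) // 0 true
		fmt.Println(i.pop()) // 1 true
		fmt.Println(i.pop()) // 0 false
	}
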
Free spanSetBlocks are managed via a lock-free stack.
lfnode.next uint64
lfnode.pushcnt uintptr
popped is the number of pop operations that have occurred on
this block. This number is used to help determine when a block
may be safely recycled.
spans is the set of spans in this block.
spanSetBlockAlloc represents a concurrent pool of spanSetBlocks.
stack lfstack
alloc tries to grab a spanSetBlock out of the pool, and if it fails
persistentallocs a new one and returns it.
free returns a spanSetBlock back to the pool.
var spanSetBlockPool
spanSetSpinePointer represents a pointer to a contiguous block of atomic.Pointer[spanSetBlock].
p unsafe.Pointer
lookup returns &s[idx].
// kind of special
// linked list in span
// span offset of object
func removespecial(p unsafe.Pointer, kind uint8) *special
func addspecial(p unsafe.Pointer, s *special) bool
func freeSpecial(s *special, p unsafe.Pointer, size uintptr)
The described object has a finalizer set for it.
specialfinalizer is allocated from non-GC'd memory, so any heap
pointers must be specially handled.
// May be a heap pointer, but always live.
// May be a heap pointer.
nret uintptr
// May be a heap pointer, but always live.
special special
specialPinCounter tracks whether an object is pinned multiple times.
counter uintptr
special special
specialReachable tracks whether an object is reachable on the next
GC cycle. This is used by testing.
done bool
reachable bool
special special
specialsIter helps iterate over specials lists.
pprev **special
s *special
(*specialsIter) next()
unlinkAndNext removes the current special from the list and moves
the iterator to the next special. It returns the unlinked special.
(*specialsIter) valid() bool
func newSpecialsIter(span *mspan) specialsIter
A srcFunc represents a logical function in the source code. This may
correspond to an actual symbol in the binary text, or it may correspond to a
source function that has been inlined.
datap *moduledata
funcID abi.FuncID
nameOff int32
startLine int32
( srcFunc) name() string
func showframe(sf srcFunc, gp *g, firstFrame bool, calleeID abi.FuncID) bool
func showfuncinfo(sf srcFunc, firstFrame bool, calleeID abi.FuncID) bool
Stack describes a Go execution stack.
The bounds of the stack are exactly [lo, hi),
with no implicit data structures on either side.
hi uintptr
lo uintptr
func stackalloc(n uint32) stack
func fillstack(stk stack, b byte)
func findsghi(gp *g, stk stack) uintptr
func signalstack(s *stack)
func stackfree(stk stack)
func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)
// linked list of free stacks
// total size of stacks in list
// bitmaps, each starting on a byte boundary
// number of bitmaps
// number of bits in each bitmap
func stackmapdata(stkmap *stackmap, n int32) bitvector
A stackObject represents a variable on the stack that has had
its address taken.
// objects with lower addresses
// offset above stack.lo
// info of the object (for ptr/nonptr bits). nil if object has been scanned.
// objects with higher addresses
// size of object
obj.r = r, but with no write barrier.
func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
Buffer for stack objects found on a goroutine stack.
Must be smaller than or equal to workbuf.
obj [63]stackObject
stackObjectBufHdr stackObjectBufHdr
stackObjectBufHdr.next *stackObjectBuf
stackObjectBufHdr.workbufhdr workbufhdr
stackObjectBufHdr.workbufhdr.nobj int
// must be first
func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
func binarySearchTree(x *stackObjectBuf, idx int, n int) (root *stackObject, restBuf *stackObjectBuf, restIdx int)
next *stackObjectBuf
workbufhdr workbufhdr
workbufhdr.nobj int
// must be first
A stackObjectRecord is generated by the compiler for each stack object in a stack frame.
This record must match the generator code in cmd/compile/internal/liveness/plive.go:emitStackObjects.
// ptrdata, or -ptrdata if GC prog is used
// offset to gcdata from moduledata.rodata
offset in frame
if negative, offset from varp
if non-negative, offset from argp
size int32
gcdata returns pointer map or GC prog of the type.
(*stackObjectRecord) ptrdata() uintptr
(*stackObjectRecord) useGCProg() bool
A stackScanState keeps track of the state used during the GC walk
of a goroutine.
buf contains the set of possible pointers to stack objects.
Organized as a LIFO linked list of buffers.
All buffers except possibly the head buffer are full.
cache pcvalueCache
cbuf contains conservative pointers to stack objects. If
all pointers to a stack object are obtained via
conservative scanning, then the stack object may be dead
and may contain dead pointers, so it must be scanned
defensively.
conservative indicates that the next frame must be scanned conservatively.
This applies only to the innermost frame at an async safe-point.
// keep around one free buffer for allocation hysteresis
list of stack objects
Objects are in increasing address order.
nobjs int
root of binary tree for fast object lookup by address
Initialized by buildIndex.
stack limits
tail *stackObjectBuf
addObject adds a stack object at addr of type typ to the set of stack objects.
buildIndex initializes s.root to a binary search tree.
It should be called after all addObject calls but before
any call of findObject.
findObject returns the stack object containing address a, if any.
Must have called buildIndex previously.
Remove and return a potential pointer to a stack object.
Returns 0 if there are no more pointers available.
This prefers non-conservative pointers so we scan stack objects
precisely if there are any non-conservative pointers to them.
Add p as a potential pointer to a stack object.
p must be a stack address.
func scanblock(b0, n0 uintptr, ptrmask *uint8, gcw *gcWork, stk *stackScanState)
func scanConservative(b, n uintptr, ptrmask *uint8, gcw *gcWork, state *stackScanState)
func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
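The getPtr behavior described above, preferring precisely-found pointers over conservatively-found ones, can be sketched as two buffers drained in order. This is an illustration with hypothetical names, not the runtime's implementation.

	package main

	import "fmt"

	type ptrBuf struct{ ptrs []uintptr }

	func (b *ptrBuf) put(p uintptr) { b.ptrs = append(b.ptrs, p) }

	func (b *ptrBuf) get() (uintptr, bool) {
		if len(b.ptrs) == 0 {
			return 0, false
		}
		p := b.ptrs[len(b.ptrs)-1]
		b.ptrs = b.ptrs[:len(b.ptrs)-1]
		return p, true
	}

	type scanState struct {
		buf  ptrBuf // pointers found precisely
		cbuf ptrBuf // pointers found by conservative scanning
	}

	// getPtr drains precise pointers first; the second result reports whether
	// the returned pointer came from conservative scanning.
	func (s *scanState) getPtr() (p uintptr, conservative bool) {
		if p, ok := s.buf.get(); ok {
			return p, false
		}
		p, _ = s.cbuf.get()
		return p, true
	}

	func main() {
		var s scanState
		s.cbuf.put(0x2000)
		s.buf.put(0x1000)
		fmt.Println(s.getPtr()) // 4096 false (precise pointer served first)
		fmt.Println(s.getPtr()) // 8192 true
	}
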
pad_cgo_0 [4]byte
ss_flags int32
ss_size uintptr
ss_sp *byte
func setGsignalStack(st *stackt, old *gsignalStack)
func setSignalstackSP(s *stackt, sp uintptr)
func sigaltstack(new, old *stackt)
Buffer for pointers found during stack tracing.
Must be smaller than or equal to workbuf.
obj [252]uintptr
stackWorkBufHdr stackWorkBufHdr
// linked list of workbufs
stackWorkBufHdr.workbufhdr workbufhdr
stackWorkBufHdr.workbufhdr.nobj int
// must be first
Header declaration must come after the buf declaration above, because of issue #14620.
// linked list of workbufs
workbufhdr workbufhdr
workbufhdr.nobj int
// must be first
statAggregate is the main driver of the metrics implementation.
It contains multiple aggregates of runtime statistics, as well
as a set of these aggregates that it has populated. The aggregates
are populated lazily by its ensure method.
cpuStats cpuStatsAggregate
ensured statDepSet
gcStats gcStatsAggregate
heapStats heapStatsAggregate
sysStats sysStatsAggregate
ensure populates statistics aggregates determined by deps if they
haven't yet been populated.
func compute0(_ *statAggregate, out *metricValue)
var agg
statDep is a dependency on a group of statistics
that a metric might have.
func makeStatDepSet(deps ...statDep) statDepSet
const cpuStatsDep
const gcStatsDep
const heapStatsDep
const numStatsDeps
const sysStatsDep
statDepSet represents a set of statDeps.
Under the hood, it's a bitmap.
difference returns set difference of s from b as a new set.
empty returns true if there are no dependencies in the set.
has returns true if the set contains a given statDep.
union returns the union of the two sets as a new set.
func makeStatDepSet(deps ...statDep) statDepSet
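A sketch (illustrative names, not the runtime's code) of a bitmap-backed dependency set like statDepSet, together with the lazy ensure pattern statAggregate uses: only dependencies that are requested and not yet populated get computed.

	package main

	import "fmt"

	type statDep uint

	const (
		heapStatsDep statDep = iota
		sysStatsDep
		cpuStatsDep
		gcStatsDep
		numStatsDeps
	)

	type statDepSet uint64 // one bit per statDep

	func makeStatDepSet(deps ...statDep) statDepSet {
		var s statDepSet
		for _, d := range deps {
			s |= 1 << d
		}
		return s
	}

	func (s statDepSet) has(d statDep) bool                 { return s&(1<<d) != 0 }
	func (s statDepSet) union(b statDepSet) statDepSet      { return s | b }
	func (s statDepSet) difference(b statDepSet) statDepSet { return s &^ b }
	func (s statDepSet) empty() bool                        { return s == 0 }

	type aggregate struct{ ensured statDepSet }

	// ensure populates only the aggregates named by deps that are still missing.
	func (a *aggregate) ensure(deps statDepSet) {
		missing := deps.difference(a.ensured)
		for d := statDep(0); d < numStatsDeps; d++ {
			if missing.has(d) {
				fmt.Println("computing dep", d) // placeholder for real aggregation work
			}
		}
		a.ensured = a.ensured.union(deps)
	}

	func main() {
		var a aggregate
		a.ensure(makeStatDepSet(heapStatsDep, gcStatsDep)) // computes deps 0 and 3
		a.ensure(makeStatDepSet(heapStatsDep))             // already ensured; computes nothing
	}
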
A stkframe holds information about a single physical stack frame.
// pointer to function arguments
continpc is the PC where execution will continue in fn, or
0 if execution will not continue in this frame.
This is usually the same as pc, unless this frame "called"
sigpanic, in which case it's either the address of
deferreturn or 0 if this frame will never execute again.
This is the PC to use to look up GC liveness for this frame.
fn is the function being run in this frame. If there is
inlining, this is the outermost function.
// stack pointer at caller aka frame pointer
// program counter at caller aka link register
pc is the program counter within fn.
The meaning of this is subtle:
- Typically, this frame performed a regular function call
and this is the return PC (just after the CALL
instruction). In this case, pc-1 reflects the CALL
instruction itself and is the correct source of symbolic
information.
- If this frame "called" sigpanic, then pc is the
instruction that panicked, and pc is the correct address
to use for symbolic information.
- If this is the innermost frame, then PC is where
execution will continue, but it may not be the
instruction following a CALL. This may be from
cooperative preemption, in which case this is the
instruction after the call to morestack. Or this may be
from a signal or an un-started goroutine, in which case
PC could be any instruction, including the first
instruction in a function. Conventionally, we use pc-1
for symbolic information, unless pc == fn.entry(), in
which case we use pc.
// stack pointer at pc
// top of local variables
argBytes returns the argument frame size for a call to frame.fn.
argMapInternal is used internally by stkframe to fetch special
argument maps.
argMap.n is always populated with the size of the argument map.
argMap.bytedata is only populated for dynamic argument maps (used
by reflect). If the caller requires the argument map, it should use
this if non-nil, and otherwise fetch the argument map using the
current PC.
hasReflectStackObj indicates that this frame also has a reflect
function stack object, which the caller must synthesize.
getStackMap returns the locals and arguments live pointer maps, and
stack object list for frame.
func adjustframe(frame *stkframe, adjinfo *adjustinfo)
func dumpframe(s *stkframe, child *childInfo)
func scanframeworker(frame *stkframe, state *stackScanState, gcw *gcWork)
func tracebackHexdump(stk stack, frame *stkframe, bad uintptr)
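The pc-1 convention described for stkframe.pc also applies outside the runtime when symbolizing return PCs from runtime.Callers: subtracting 1 moves the PC back into the CALL instruction so the reported file and line name the call site rather than the following statement. runtime.CallersFrames performs this adjustment automatically; the manual form below is only an illustration.

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		pcs := make([]uintptr, 8)
		n := runtime.Callers(1, pcs) // return PCs: each points just after a CALL
		for _, pc := range pcs[:n] {
			f := runtime.FuncForPC(pc - 1) // pc-1 falls inside the CALL instruction itself
			if f == nil {
				continue
			}
			file, line := f.FileLine(pc - 1)
			fmt.Printf("%s\n\t%s:%d\n", f.Name(), file, line)
		}
	}
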
( stringer) String() string
*runtime/debug.BuildInfo
*bytes.Buffer
crypto.Hash
crypto/tls.ClientAuthType
crypto/tls.CurveID
crypto/tls.QUICEncryptionLevel
crypto/tls.SignatureScheme
crypto/x509.PublicKeyAlgorithm
crypto/x509.SignatureAlgorithm
crypto/x509/pkix.Name
crypto/x509/pkix.RDNSequence
encoding/asn1.ObjectIdentifier
encoding/binary.AppendByteOrder (interface)
encoding/binary.ByteOrder (interface)
encoding/json.Delim
encoding/json.Number
flag.Getter (interface)
flag.Value (interface)
fmt.Stringer (interface)
github.com/go-faster/jx.Encoder
github.com/go-faster/jx.Num
github.com/go-faster/jx.Raw
github.com/go-faster/jx.Type
github.com/go-faster/jx.Writer
github.com/gotd/neo.NetAddr
github.com/gotd/td/bin.Fields
github.com/gotd/td/internal/crypto.AuthKey
github.com/gotd/td/internal/crypto.Key
*github.com/gotd/td/internal/mt.BadMsgNotification
github.com/gotd/td/internal/mt.BadMsgNotificationClass (interface)
*github.com/gotd/td/internal/mt.BadServerSalt
*github.com/gotd/td/internal/mt.ClientDHInnerData
*github.com/gotd/td/internal/mt.DestroySessionNone
*github.com/gotd/td/internal/mt.DestroySessionOk
*github.com/gotd/td/internal/mt.DestroySessionRequest
github.com/gotd/td/internal/mt.DestroySessionResClass (interface)
*github.com/gotd/td/internal/mt.DhGenFail
*github.com/gotd/td/internal/mt.DhGenOk
*github.com/gotd/td/internal/mt.DhGenRetry
*github.com/gotd/td/internal/mt.FutureSalt
*github.com/gotd/td/internal/mt.FutureSalts
*github.com/gotd/td/internal/mt.GetFutureSaltsRequest
*github.com/gotd/td/internal/mt.GzipPacked
*github.com/gotd/td/internal/mt.HTTPWaitRequest
*github.com/gotd/td/internal/mt.Message
*github.com/gotd/td/internal/mt.MsgContainer
*github.com/gotd/td/internal/mt.MsgCopy
*github.com/gotd/td/internal/mt.MsgDetailedInfo
github.com/gotd/td/internal/mt.MsgDetailedInfoClass (interface)
*github.com/gotd/td/internal/mt.MsgNewDetailedInfo
*github.com/gotd/td/internal/mt.MsgResendReq
*github.com/gotd/td/internal/mt.MsgsAck
*github.com/gotd/td/internal/mt.MsgsAllInfo
*github.com/gotd/td/internal/mt.MsgsStateInfo
*github.com/gotd/td/internal/mt.MsgsStateReq
*github.com/gotd/td/internal/mt.NewSessionCreated
*github.com/gotd/td/internal/mt.PingDelayDisconnectRequest
*github.com/gotd/td/internal/mt.PingRequest
*github.com/gotd/td/internal/mt.Pong
*github.com/gotd/td/internal/mt.PQInnerData
github.com/gotd/td/internal/mt.PQInnerDataClass (interface)
*github.com/gotd/td/internal/mt.PQInnerDataDC
*github.com/gotd/td/internal/mt.PQInnerDataTempDC
*github.com/gotd/td/internal/mt.ReqDHParamsRequest
*github.com/gotd/td/internal/mt.ReqPqMultiRequest
*github.com/gotd/td/internal/mt.ReqPqRequest
*github.com/gotd/td/internal/mt.ResPQ
*github.com/gotd/td/internal/mt.RPCAnswerDropped
*github.com/gotd/td/internal/mt.RPCAnswerDroppedRunning
*github.com/gotd/td/internal/mt.RPCAnswerUnknown
github.com/gotd/td/internal/mt.RPCDropAnswerClass (interface)
*github.com/gotd/td/internal/mt.RPCDropAnswerRequest
*github.com/gotd/td/internal/mt.RPCError
*github.com/gotd/td/internal/mt.RPCResult
*github.com/gotd/td/internal/mt.ServerDHInnerData
github.com/gotd/td/internal/mt.ServerDHParamsClass (interface)
*github.com/gotd/td/internal/mt.ServerDHParamsFail
*github.com/gotd/td/internal/mt.ServerDHParamsOk
github.com/gotd/td/internal/mt.SetClientDHParamsAnswerClass (interface)
*github.com/gotd/td/internal/mt.SetClientDHParamsRequest
github.com/gotd/td/internal/proto.MessageID
github.com/gotd/td/internal/proto.MessageType
github.com/gotd/td/session/tdesktop.MTPConfigEnvironment
github.com/gotd/td/tdjson.Encoder
github.com/gotd/td/telegram/auth/qrlogin.Token
github.com/gotd/td/telegram/internal/manager.ConnMode
*github.com/gotd/td/tg.AccessPointRule
*github.com/gotd/td/tg.AccountAcceptAuthorizationRequest
*github.com/gotd/td/tg.AccountAuthorizationForm
*github.com/gotd/td/tg.AccountAuthorizations
*github.com/gotd/td/tg.AccountAutoDownloadSettings
*github.com/gotd/td/tg.AccountAutoSaveSettings
*github.com/gotd/td/tg.AccountCancelPasswordEmailRequest
*github.com/gotd/td/tg.AccountChangeAuthorizationSettingsRequest
*github.com/gotd/td/tg.AccountChangePhoneRequest
*github.com/gotd/td/tg.AccountCheckUsernameRequest
*github.com/gotd/td/tg.AccountClearRecentEmojiStatusesRequest
*github.com/gotd/td/tg.AccountConfirmPasswordEmailRequest
*github.com/gotd/td/tg.AccountConfirmPhoneRequest
*github.com/gotd/td/tg.AccountContentSettings
*github.com/gotd/td/tg.AccountCreateThemeRequest
*github.com/gotd/td/tg.AccountDaysTTL
*github.com/gotd/td/tg.AccountDeclinePasswordResetRequest
*github.com/gotd/td/tg.AccountDeleteAccountRequest
*github.com/gotd/td/tg.AccountDeleteAutoSaveExceptionsRequest
*github.com/gotd/td/tg.AccountDeleteSecureValueRequest
*github.com/gotd/td/tg.AccountEmailVerified
github.com/gotd/td/tg.AccountEmailVerifiedClass (interface)
*github.com/gotd/td/tg.AccountEmailVerifiedLogin
*github.com/gotd/td/tg.AccountEmojiStatuses
github.com/gotd/td/tg.AccountEmojiStatusesClass (interface)
*github.com/gotd/td/tg.AccountEmojiStatusesNotModified
*github.com/gotd/td/tg.AccountFinishTakeoutSessionRequest
*github.com/gotd/td/tg.AccountGetAccountTTLRequest
*github.com/gotd/td/tg.AccountGetAllSecureValuesRequest
*github.com/gotd/td/tg.AccountGetAuthorizationFormRequest
*github.com/gotd/td/tg.AccountGetAuthorizationsRequest
*github.com/gotd/td/tg.AccountGetAutoDownloadSettingsRequest
*github.com/gotd/td/tg.AccountGetAutoSaveSettingsRequest
*github.com/gotd/td/tg.AccountGetChannelDefaultEmojiStatusesRequest
*github.com/gotd/td/tg.AccountGetChannelRestrictedStatusEmojisRequest
*github.com/gotd/td/tg.AccountGetChatThemesRequest
*github.com/gotd/td/tg.AccountGetContactSignUpNotificationRequest
*github.com/gotd/td/tg.AccountGetContentSettingsRequest
*github.com/gotd/td/tg.AccountGetDefaultBackgroundEmojisRequest
*github.com/gotd/td/tg.AccountGetDefaultEmojiStatusesRequest
*github.com/gotd/td/tg.AccountGetDefaultGroupPhotoEmojisRequest
*github.com/gotd/td/tg.AccountGetDefaultProfilePhotoEmojisRequest
*github.com/gotd/td/tg.AccountGetGlobalPrivacySettingsRequest
*github.com/gotd/td/tg.AccountGetMultiWallPapersRequest
*github.com/gotd/td/tg.AccountGetNotifyExceptionsRequest
*github.com/gotd/td/tg.AccountGetNotifySettingsRequest
*github.com/gotd/td/tg.AccountGetPasswordRequest
*github.com/gotd/td/tg.AccountGetPasswordSettingsRequest
*github.com/gotd/td/tg.AccountGetPrivacyRequest
*github.com/gotd/td/tg.AccountGetRecentEmojiStatusesRequest
*github.com/gotd/td/tg.AccountGetSavedRingtonesRequest
*github.com/gotd/td/tg.AccountGetSecureValueRequest
*github.com/gotd/td/tg.AccountGetThemeRequest
*github.com/gotd/td/tg.AccountGetThemesRequest
*github.com/gotd/td/tg.AccountGetTmpPasswordRequest
*github.com/gotd/td/tg.AccountGetWallPaperRequest
*github.com/gotd/td/tg.AccountGetWallPapersRequest
*github.com/gotd/td/tg.AccountGetWebAuthorizationsRequest
*github.com/gotd/td/tg.AccountInitTakeoutSessionRequest
*github.com/gotd/td/tg.AccountInstallThemeRequest
*github.com/gotd/td/tg.AccountInstallWallPaperRequest
*github.com/gotd/td/tg.AccountInvalidateSignInCodesRequest
*github.com/gotd/td/tg.AccountPassword
*github.com/gotd/td/tg.AccountPasswordInputSettings
*github.com/gotd/td/tg.AccountPasswordSettings
*github.com/gotd/td/tg.AccountPrivacyRules
*github.com/gotd/td/tg.AccountRegisterDeviceRequest
*github.com/gotd/td/tg.AccountReorderUsernamesRequest
*github.com/gotd/td/tg.AccountReportPeerRequest
*github.com/gotd/td/tg.AccountReportProfilePhotoRequest
*github.com/gotd/td/tg.AccountResendPasswordEmailRequest
*github.com/gotd/td/tg.AccountResetAuthorizationRequest
*github.com/gotd/td/tg.AccountResetNotifySettingsRequest
*github.com/gotd/td/tg.AccountResetPasswordFailedWait
*github.com/gotd/td/tg.AccountResetPasswordOk
*github.com/gotd/td/tg.AccountResetPasswordRequest
*github.com/gotd/td/tg.AccountResetPasswordRequestedWait
github.com/gotd/td/tg.AccountResetPasswordResultClass (interface)
*github.com/gotd/td/tg.AccountResetWallPapersRequest
*github.com/gotd/td/tg.AccountResetWebAuthorizationRequest
*github.com/gotd/td/tg.AccountResetWebAuthorizationsRequest
*github.com/gotd/td/tg.AccountSaveAutoDownloadSettingsRequest
*github.com/gotd/td/tg.AccountSaveAutoSaveSettingsRequest
*github.com/gotd/td/tg.AccountSavedRingtone
github.com/gotd/td/tg.AccountSavedRingtoneClass (interface)
*github.com/gotd/td/tg.AccountSavedRingtoneConverted
*github.com/gotd/td/tg.AccountSavedRingtones
github.com/gotd/td/tg.AccountSavedRingtonesClass (interface)
*github.com/gotd/td/tg.AccountSavedRingtonesNotModified
*github.com/gotd/td/tg.AccountSaveRingtoneRequest
*github.com/gotd/td/tg.AccountSaveSecureValueRequest
*github.com/gotd/td/tg.AccountSaveThemeRequest
*github.com/gotd/td/tg.AccountSaveWallPaperRequest
*github.com/gotd/td/tg.AccountSendChangePhoneCodeRequest
*github.com/gotd/td/tg.AccountSendConfirmPhoneCodeRequest
*github.com/gotd/td/tg.AccountSendVerifyEmailCodeRequest
*github.com/gotd/td/tg.AccountSendVerifyPhoneCodeRequest
*github.com/gotd/td/tg.AccountSentEmailCode
*github.com/gotd/td/tg.AccountSetAccountTTLRequest
*github.com/gotd/td/tg.AccountSetAuthorizationTTLRequest
*github.com/gotd/td/tg.AccountSetContactSignUpNotificationRequest
*github.com/gotd/td/tg.AccountSetContentSettingsRequest
*github.com/gotd/td/tg.AccountSetGlobalPrivacySettingsRequest
*github.com/gotd/td/tg.AccountSetPrivacyRequest
*github.com/gotd/td/tg.AccountTakeout
*github.com/gotd/td/tg.AccountThemes
github.com/gotd/td/tg.AccountThemesClass (interface)
*github.com/gotd/td/tg.AccountThemesNotModified
*github.com/gotd/td/tg.AccountTmpPassword
*github.com/gotd/td/tg.AccountToggleUsernameRequest
*github.com/gotd/td/tg.AccountUnregisterDeviceRequest
*github.com/gotd/td/tg.AccountUpdateColorRequest
*github.com/gotd/td/tg.AccountUpdateDeviceLockedRequest
*github.com/gotd/td/tg.AccountUpdateEmojiStatusRequest
*github.com/gotd/td/tg.AccountUpdateNotifySettingsRequest
*github.com/gotd/td/tg.AccountUpdatePasswordSettingsRequest
*github.com/gotd/td/tg.AccountUpdateProfileRequest
*github.com/gotd/td/tg.AccountUpdateStatusRequest
*github.com/gotd/td/tg.AccountUpdateThemeRequest
*github.com/gotd/td/tg.AccountUpdateUsernameRequest
*github.com/gotd/td/tg.AccountUploadRingtoneRequest
*github.com/gotd/td/tg.AccountUploadThemeRequest
*github.com/gotd/td/tg.AccountUploadWallPaperRequest
*github.com/gotd/td/tg.AccountVerifyEmailRequest
*github.com/gotd/td/tg.AccountVerifyPhoneRequest
*github.com/gotd/td/tg.AccountWallPapers
github.com/gotd/td/tg.AccountWallPapersClass (interface)
*github.com/gotd/td/tg.AccountWallPapersNotModified
*github.com/gotd/td/tg.AccountWebAuthorizations
*github.com/gotd/td/tg.AppWebViewResultURL
*github.com/gotd/td/tg.AttachMenuBot
*github.com/gotd/td/tg.AttachMenuBotIcon
*github.com/gotd/td/tg.AttachMenuBotIconColor
*github.com/gotd/td/tg.AttachMenuBots
*github.com/gotd/td/tg.AttachMenuBotsBot
github.com/gotd/td/tg.AttachMenuBotsClass (interface)
*github.com/gotd/td/tg.AttachMenuBotsNotModified
*github.com/gotd/td/tg.AttachMenuPeerTypeBotPM
*github.com/gotd/td/tg.AttachMenuPeerTypeBroadcast
*github.com/gotd/td/tg.AttachMenuPeerTypeChat
github.com/gotd/td/tg.AttachMenuPeerTypeClass (interface)
*github.com/gotd/td/tg.AttachMenuPeerTypePM
*github.com/gotd/td/tg.AttachMenuPeerTypeSameBotPM
*github.com/gotd/td/tg.AuthAcceptLoginTokenRequest
*github.com/gotd/td/tg.AuthAuthorization
github.com/gotd/td/tg.AuthAuthorizationClass (interface)
*github.com/gotd/td/tg.AuthAuthorizationSignUpRequired
*github.com/gotd/td/tg.AuthBindTempAuthKeyRequest
*github.com/gotd/td/tg.AuthCancelCodeRequest
*github.com/gotd/td/tg.AuthCheckPasswordRequest
*github.com/gotd/td/tg.AuthCheckRecoveryPasswordRequest
*github.com/gotd/td/tg.AuthCodeTypeCall
github.com/gotd/td/tg.AuthCodeTypeClass (interface)
*github.com/gotd/td/tg.AuthCodeTypeFlashCall
*github.com/gotd/td/tg.AuthCodeTypeFragmentSMS
*github.com/gotd/td/tg.AuthCodeTypeMissedCall
*github.com/gotd/td/tg.AuthCodeTypeSMS
*github.com/gotd/td/tg.AuthDropTempAuthKeysRequest
*github.com/gotd/td/tg.AuthExportAuthorizationRequest
*github.com/gotd/td/tg.AuthExportedAuthorization
*github.com/gotd/td/tg.AuthExportLoginTokenRequest
*github.com/gotd/td/tg.AuthImportAuthorizationRequest
*github.com/gotd/td/tg.AuthImportBotAuthorizationRequest
*github.com/gotd/td/tg.AuthImportLoginTokenRequest
*github.com/gotd/td/tg.AuthImportWebTokenAuthorizationRequest
*github.com/gotd/td/tg.AuthLoggedOut
*github.com/gotd/td/tg.AuthLoginToken
github.com/gotd/td/tg.AuthLoginTokenClass (interface)
*github.com/gotd/td/tg.AuthLoginTokenMigrateTo
*github.com/gotd/td/tg.AuthLoginTokenSuccess
*github.com/gotd/td/tg.AuthLogOutRequest
*github.com/gotd/td/tg.Authorization
*github.com/gotd/td/tg.AuthPasswordRecovery
*github.com/gotd/td/tg.AuthRecoverPasswordRequest
*github.com/gotd/td/tg.AuthRequestFirebaseSMSRequest
*github.com/gotd/td/tg.AuthRequestPasswordRecoveryRequest
*github.com/gotd/td/tg.AuthResendCodeRequest
*github.com/gotd/td/tg.AuthResetAuthorizationsRequest
*github.com/gotd/td/tg.AuthResetLoginEmailRequest
*github.com/gotd/td/tg.AuthSendCodeRequest
*github.com/gotd/td/tg.AuthSentCode
github.com/gotd/td/tg.AuthSentCodeClass (interface)
*github.com/gotd/td/tg.AuthSentCodeSuccess
*github.com/gotd/td/tg.AuthSentCodeTypeApp
*github.com/gotd/td/tg.AuthSentCodeTypeCall
github.com/gotd/td/tg.AuthSentCodeTypeClass (interface)
*github.com/gotd/td/tg.AuthSentCodeTypeEmailCode
*github.com/gotd/td/tg.AuthSentCodeTypeFirebaseSMS
*github.com/gotd/td/tg.AuthSentCodeTypeFlashCall
*github.com/gotd/td/tg.AuthSentCodeTypeFragmentSMS
*github.com/gotd/td/tg.AuthSentCodeTypeMissedCall
*github.com/gotd/td/tg.AuthSentCodeTypeSetUpEmailRequired
*github.com/gotd/td/tg.AuthSentCodeTypeSMS
*github.com/gotd/td/tg.AuthSignInRequest
*github.com/gotd/td/tg.AuthSignUpRequest
*github.com/gotd/td/tg.AutoDownloadSettings
*github.com/gotd/td/tg.AutoSaveException
*github.com/gotd/td/tg.AutoSaveSettings
*github.com/gotd/td/tg.AvailableReaction
*github.com/gotd/td/tg.BankCardOpenURL
*github.com/gotd/td/tg.BaseThemeArctic
github.com/gotd/td/tg.BaseThemeClass (interface)
*github.com/gotd/td/tg.BaseThemeClassic
*github.com/gotd/td/tg.BaseThemeDay
*github.com/gotd/td/tg.BaseThemeNight
*github.com/gotd/td/tg.BaseThemeTinted
github.com/gotd/td/tg.BoolClass (interface)
*github.com/gotd/td/tg.BoolFalse
*github.com/gotd/td/tg.BoolTrue
*github.com/gotd/td/tg.Boost
*github.com/gotd/td/tg.BotApp
github.com/gotd/td/tg.BotAppClass (interface)
*github.com/gotd/td/tg.BotAppNotModified
*github.com/gotd/td/tg.BotCommand
*github.com/gotd/td/tg.BotCommandScopeChatAdmins
*github.com/gotd/td/tg.BotCommandScopeChats
github.com/gotd/td/tg.BotCommandScopeClass (interface)
*github.com/gotd/td/tg.BotCommandScopeDefault
*github.com/gotd/td/tg.BotCommandScopePeer
*github.com/gotd/td/tg.BotCommandScopePeerAdmins
*github.com/gotd/td/tg.BotCommandScopePeerUser
*github.com/gotd/td/tg.BotCommandScopeUsers
*github.com/gotd/td/tg.BotCommandVector
*github.com/gotd/td/tg.BotInfo
*github.com/gotd/td/tg.BotInlineMediaResult
github.com/gotd/td/tg.BotInlineMessageClass (interface)
*github.com/gotd/td/tg.BotInlineMessageMediaAuto
*github.com/gotd/td/tg.BotInlineMessageMediaContact
*github.com/gotd/td/tg.BotInlineMessageMediaGeo
*github.com/gotd/td/tg.BotInlineMessageMediaInvoice
*github.com/gotd/td/tg.BotInlineMessageMediaVenue
*github.com/gotd/td/tg.BotInlineMessageMediaWebPage
*github.com/gotd/td/tg.BotInlineMessageText
*github.com/gotd/td/tg.BotInlineResult
github.com/gotd/td/tg.BotInlineResultClass (interface)
*github.com/gotd/td/tg.BotMenuButton
github.com/gotd/td/tg.BotMenuButtonClass (interface)
*github.com/gotd/td/tg.BotMenuButtonCommands
*github.com/gotd/td/tg.BotMenuButtonDefault
*github.com/gotd/td/tg.BotsAllowSendMessageRequest
*github.com/gotd/td/tg.BotsAnswerWebhookJSONQueryRequest
*github.com/gotd/td/tg.BotsBotInfo
*github.com/gotd/td/tg.BotsCanSendMessageRequest
*github.com/gotd/td/tg.BotsGetBotCommandsRequest
*github.com/gotd/td/tg.BotsGetBotInfoRequest
*github.com/gotd/td/tg.BotsGetBotMenuButtonRequest
*github.com/gotd/td/tg.BotsInvokeWebViewCustomMethodRequest
*github.com/gotd/td/tg.BotsReorderUsernamesRequest
*github.com/gotd/td/tg.BotsResetBotCommandsRequest
*github.com/gotd/td/tg.BotsSendCustomRequestRequest
*github.com/gotd/td/tg.BotsSetBotBroadcastDefaultAdminRightsRequest
*github.com/gotd/td/tg.BotsSetBotCommandsRequest
*github.com/gotd/td/tg.BotsSetBotGroupDefaultAdminRightsRequest
*github.com/gotd/td/tg.BotsSetBotInfoRequest
*github.com/gotd/td/tg.BotsSetBotMenuButtonRequest
*github.com/gotd/td/tg.BotsToggleUsernameRequest
*github.com/gotd/td/tg.Bytes
*github.com/gotd/td/tg.CDNConfig
*github.com/gotd/td/tg.CDNPublicKey
*github.com/gotd/td/tg.Channel
*github.com/gotd/td/tg.ChannelAdminLogEvent
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeAbout
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeAvailableReactions
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeEmojiStatus
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeHistoryTTL
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeLinkedChat
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeLocation
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangePeerColor
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangePhoto
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeProfilePeerColor
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeStickerSet
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeTitle
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeUsername
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeUsernames
*github.com/gotd/td/tg.ChannelAdminLogEventActionChangeWallpaper
github.com/gotd/td/tg.ChannelAdminLogEventActionClass (interface)
*github.com/gotd/td/tg.ChannelAdminLogEventActionCreateTopic
*github.com/gotd/td/tg.ChannelAdminLogEventActionDefaultBannedRights
*github.com/gotd/td/tg.ChannelAdminLogEventActionDeleteMessage
*github.com/gotd/td/tg.ChannelAdminLogEventActionDeleteTopic
*github.com/gotd/td/tg.ChannelAdminLogEventActionDiscardGroupCall
*github.com/gotd/td/tg.ChannelAdminLogEventActionEditMessage
*github.com/gotd/td/tg.ChannelAdminLogEventActionEditTopic
*github.com/gotd/td/tg.ChannelAdminLogEventActionExportedInviteDelete
*github.com/gotd/td/tg.ChannelAdminLogEventActionExportedInviteEdit
*github.com/gotd/td/tg.ChannelAdminLogEventActionExportedInviteRevoke
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantInvite
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantJoin
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantJoinByInvite
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantJoinByRequest
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantLeave
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantMute
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantToggleAdmin
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantToggleBan
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantUnmute
*github.com/gotd/td/tg.ChannelAdminLogEventActionParticipantVolume
*github.com/gotd/td/tg.ChannelAdminLogEventActionPinTopic
*github.com/gotd/td/tg.ChannelAdminLogEventActionSendMessage
*github.com/gotd/td/tg.ChannelAdminLogEventActionStartGroupCall
*github.com/gotd/td/tg.ChannelAdminLogEventActionStopPoll
*github.com/gotd/td/tg.ChannelAdminLogEventActionToggleAntiSpam
*github.com/gotd/td/tg.ChannelAdminLogEventActionToggleForum
*github.com/gotd/td/tg.ChannelAdminLogEventActionToggleGroupCallSetting
*github.com/gotd/td/tg.ChannelAdminLogEventActionToggleInvites
*github.com/gotd/td/tg.ChannelAdminLogEventActionToggleNoForwards
*github.com/gotd/td/tg.ChannelAdminLogEventActionTogglePreHistoryHidden
*github.com/gotd/td/tg.ChannelAdminLogEventActionToggleSignatures
*github.com/gotd/td/tg.ChannelAdminLogEventActionToggleSlowMode
*github.com/gotd/td/tg.ChannelAdminLogEventActionUpdatePinned
*github.com/gotd/td/tg.ChannelAdminLogEventsFilter
*github.com/gotd/td/tg.ChannelForbidden
*github.com/gotd/td/tg.ChannelFull
*github.com/gotd/td/tg.ChannelLocation
github.com/gotd/td/tg.ChannelLocationClass (interface)
*github.com/gotd/td/tg.ChannelLocationEmpty
*github.com/gotd/td/tg.ChannelMessagesFilter
github.com/gotd/td/tg.ChannelMessagesFilterClass (interface)
*github.com/gotd/td/tg.ChannelMessagesFilterEmpty
*github.com/gotd/td/tg.ChannelParticipant
*github.com/gotd/td/tg.ChannelParticipantAdmin
*github.com/gotd/td/tg.ChannelParticipantBanned
github.com/gotd/td/tg.ChannelParticipantClass (interface)
*github.com/gotd/td/tg.ChannelParticipantCreator
*github.com/gotd/td/tg.ChannelParticipantLeft
*github.com/gotd/td/tg.ChannelParticipantSelf
*github.com/gotd/td/tg.ChannelParticipantsAdmins
*github.com/gotd/td/tg.ChannelParticipantsBanned
*github.com/gotd/td/tg.ChannelParticipantsBots
*github.com/gotd/td/tg.ChannelParticipantsContacts
github.com/gotd/td/tg.ChannelParticipantsFilterClass (interface)
*github.com/gotd/td/tg.ChannelParticipantsKicked
*github.com/gotd/td/tg.ChannelParticipantsMentions
*github.com/gotd/td/tg.ChannelParticipantsRecent
*github.com/gotd/td/tg.ChannelParticipantsSearch
*github.com/gotd/td/tg.ChannelsAdminLogResults
*github.com/gotd/td/tg.ChannelsChannelParticipant
*github.com/gotd/td/tg.ChannelsChannelParticipants
github.com/gotd/td/tg.ChannelsChannelParticipantsClass (interface)
*github.com/gotd/td/tg.ChannelsChannelParticipantsNotModified
*github.com/gotd/td/tg.ChannelsCheckUsernameRequest
*github.com/gotd/td/tg.ChannelsClickSponsoredMessageRequest
*github.com/gotd/td/tg.ChannelsConvertToGigagroupRequest
*github.com/gotd/td/tg.ChannelsCreateChannelRequest
*github.com/gotd/td/tg.ChannelsCreateForumTopicRequest
*github.com/gotd/td/tg.ChannelsDeactivateAllUsernamesRequest
*github.com/gotd/td/tg.ChannelsDeleteChannelRequest
*github.com/gotd/td/tg.ChannelsDeleteHistoryRequest
*github.com/gotd/td/tg.ChannelsDeleteMessagesRequest
*github.com/gotd/td/tg.ChannelsDeleteParticipantHistoryRequest
*github.com/gotd/td/tg.ChannelsDeleteTopicHistoryRequest
*github.com/gotd/td/tg.ChannelsEditAdminRequest
*github.com/gotd/td/tg.ChannelsEditBannedRequest
*github.com/gotd/td/tg.ChannelsEditCreatorRequest
*github.com/gotd/td/tg.ChannelsEditForumTopicRequest
*github.com/gotd/td/tg.ChannelsEditLocationRequest
*github.com/gotd/td/tg.ChannelsEditPhotoRequest
*github.com/gotd/td/tg.ChannelsEditTitleRequest
*github.com/gotd/td/tg.ChannelsExportMessageLinkRequest
*github.com/gotd/td/tg.ChannelsGetAdminedPublicChannelsRequest
*github.com/gotd/td/tg.ChannelsGetAdminLogRequest
*github.com/gotd/td/tg.ChannelsGetChannelRecommendationsRequest
*github.com/gotd/td/tg.ChannelsGetChannelsRequest
*github.com/gotd/td/tg.ChannelsGetForumTopicsByIDRequest
*github.com/gotd/td/tg.ChannelsGetForumTopicsRequest
*github.com/gotd/td/tg.ChannelsGetFullChannelRequest
*github.com/gotd/td/tg.ChannelsGetGroupsForDiscussionRequest
*github.com/gotd/td/tg.ChannelsGetInactiveChannelsRequest
*github.com/gotd/td/tg.ChannelsGetLeftChannelsRequest
*github.com/gotd/td/tg.ChannelsGetMessagesRequest
*github.com/gotd/td/tg.ChannelsGetParticipantRequest
*github.com/gotd/td/tg.ChannelsGetParticipantsRequest
*github.com/gotd/td/tg.ChannelsGetSendAsRequest
*github.com/gotd/td/tg.ChannelsGetSponsoredMessagesRequest
*github.com/gotd/td/tg.ChannelsInviteToChannelRequest
*github.com/gotd/td/tg.ChannelsJoinChannelRequest
*github.com/gotd/td/tg.ChannelsLeaveChannelRequest
*github.com/gotd/td/tg.ChannelsReadHistoryRequest
*github.com/gotd/td/tg.ChannelsReadMessageContentsRequest
*github.com/gotd/td/tg.ChannelsReorderPinnedForumTopicsRequest
*github.com/gotd/td/tg.ChannelsReorderUsernamesRequest
*github.com/gotd/td/tg.ChannelsReportAntiSpamFalsePositiveRequest
*github.com/gotd/td/tg.ChannelsReportSpamRequest
*github.com/gotd/td/tg.ChannelsSendAsPeers
*github.com/gotd/td/tg.ChannelsSetDiscussionGroupRequest
*github.com/gotd/td/tg.ChannelsSetStickersRequest
*github.com/gotd/td/tg.ChannelsToggleAntiSpamRequest
*github.com/gotd/td/tg.ChannelsToggleForumRequest
*github.com/gotd/td/tg.ChannelsToggleJoinRequestRequest
*github.com/gotd/td/tg.ChannelsToggleJoinToSendRequest
*github.com/gotd/td/tg.ChannelsToggleParticipantsHiddenRequest
*github.com/gotd/td/tg.ChannelsTogglePreHistoryHiddenRequest
*github.com/gotd/td/tg.ChannelsToggleSignaturesRequest
*github.com/gotd/td/tg.ChannelsToggleSlowModeRequest
*github.com/gotd/td/tg.ChannelsToggleUsernameRequest
*github.com/gotd/td/tg.ChannelsToggleViewForumAsMessagesRequest
*github.com/gotd/td/tg.ChannelsUpdateColorRequest
*github.com/gotd/td/tg.ChannelsUpdateEmojiStatusRequest
*github.com/gotd/td/tg.ChannelsUpdatePinnedForumTopicRequest
*github.com/gotd/td/tg.ChannelsUpdateUsernameRequest
*github.com/gotd/td/tg.ChannelsViewSponsoredMessageRequest
*github.com/gotd/td/tg.Chat
*github.com/gotd/td/tg.ChatAdminRights
*github.com/gotd/td/tg.ChatAdminWithInvites
*github.com/gotd/td/tg.ChatBannedRights
github.com/gotd/td/tg.ChatClass (interface)
*github.com/gotd/td/tg.ChatEmpty
*github.com/gotd/td/tg.ChatForbidden
*github.com/gotd/td/tg.ChatFull
github.com/gotd/td/tg.ChatFullClass (interface)
*github.com/gotd/td/tg.ChatInvite
*github.com/gotd/td/tg.ChatInviteAlready
github.com/gotd/td/tg.ChatInviteClass (interface)
*github.com/gotd/td/tg.ChatInviteExported
*github.com/gotd/td/tg.ChatInviteImporter
*github.com/gotd/td/tg.ChatInvitePeek
*github.com/gotd/td/tg.ChatInvitePublicJoinRequests
*github.com/gotd/td/tg.ChatlistsChatlistInvite
*github.com/gotd/td/tg.ChatlistsChatlistInviteAlready
github.com/gotd/td/tg.ChatlistsChatlistInviteClass (interface)
*github.com/gotd/td/tg.ChatlistsChatlistUpdates
*github.com/gotd/td/tg.ChatlistsCheckChatlistInviteRequest
*github.com/gotd/td/tg.ChatlistsDeleteExportedInviteRequest
*github.com/gotd/td/tg.ChatlistsEditExportedInviteRequest
*github.com/gotd/td/tg.ChatlistsExportChatlistInviteRequest
*github.com/gotd/td/tg.ChatlistsExportedChatlistInvite
*github.com/gotd/td/tg.ChatlistsExportedInvites
*github.com/gotd/td/tg.ChatlistsGetChatlistUpdatesRequest
*github.com/gotd/td/tg.ChatlistsGetExportedInvitesRequest
*github.com/gotd/td/tg.ChatlistsGetLeaveChatlistSuggestionsRequest
*github.com/gotd/td/tg.ChatlistsHideChatlistUpdatesRequest
*github.com/gotd/td/tg.ChatlistsJoinChatlistInviteRequest
*github.com/gotd/td/tg.ChatlistsJoinChatlistUpdatesRequest
*github.com/gotd/td/tg.ChatlistsLeaveChatlistRequest
*github.com/gotd/td/tg.ChatOnlines
*github.com/gotd/td/tg.ChatParticipant
*github.com/gotd/td/tg.ChatParticipantAdmin
github.com/gotd/td/tg.ChatParticipantClass (interface)
*github.com/gotd/td/tg.ChatParticipantCreator
*github.com/gotd/td/tg.ChatParticipants
github.com/gotd/td/tg.ChatParticipantsClass (interface)
*github.com/gotd/td/tg.ChatParticipantsForbidden
*github.com/gotd/td/tg.ChatPhoto
github.com/gotd/td/tg.ChatPhotoClass (interface)
*github.com/gotd/td/tg.ChatPhotoEmpty
*github.com/gotd/td/tg.ChatReactionsAll
github.com/gotd/td/tg.ChatReactionsClass (interface)
*github.com/gotd/td/tg.ChatReactionsNone
*github.com/gotd/td/tg.ChatReactionsSome
*github.com/gotd/td/tg.CodeSettings
*github.com/gotd/td/tg.Config
*github.com/gotd/td/tg.Contact
*github.com/gotd/td/tg.ContactStatus
*github.com/gotd/td/tg.ContactStatusVector
*github.com/gotd/td/tg.ContactsAcceptContactRequest
*github.com/gotd/td/tg.ContactsAddContactRequest
*github.com/gotd/td/tg.ContactsBlocked
github.com/gotd/td/tg.ContactsBlockedClass (interface)
*github.com/gotd/td/tg.ContactsBlockedSlice
*github.com/gotd/td/tg.ContactsBlockFromRepliesRequest
*github.com/gotd/td/tg.ContactsBlockRequest
*github.com/gotd/td/tg.ContactsContacts
github.com/gotd/td/tg.ContactsContactsClass (interface)
*github.com/gotd/td/tg.ContactsContactsNotModified
*github.com/gotd/td/tg.ContactsDeleteByPhonesRequest
*github.com/gotd/td/tg.ContactsDeleteContactsRequest
*github.com/gotd/td/tg.ContactsEditCloseFriendsRequest
*github.com/gotd/td/tg.ContactsExportContactTokenRequest
*github.com/gotd/td/tg.ContactsFound
*github.com/gotd/td/tg.ContactsGetBlockedRequest
*github.com/gotd/td/tg.ContactsGetContactIDsRequest
*github.com/gotd/td/tg.ContactsGetContactsRequest
*github.com/gotd/td/tg.ContactsGetLocatedRequest
*github.com/gotd/td/tg.ContactsGetSavedRequest
*github.com/gotd/td/tg.ContactsGetStatusesRequest
*github.com/gotd/td/tg.ContactsGetTopPeersRequest
*github.com/gotd/td/tg.ContactsImportContactsRequest
*github.com/gotd/td/tg.ContactsImportContactTokenRequest
*github.com/gotd/td/tg.ContactsImportedContacts
*github.com/gotd/td/tg.ContactsResetSavedRequest
*github.com/gotd/td/tg.ContactsResetTopPeerRatingRequest
*github.com/gotd/td/tg.ContactsResolvedPeer
*github.com/gotd/td/tg.ContactsResolvePhoneRequest
*github.com/gotd/td/tg.ContactsResolveUsernameRequest
*github.com/gotd/td/tg.ContactsSearchRequest
*github.com/gotd/td/tg.ContactsSetBlockedRequest
*github.com/gotd/td/tg.ContactsToggleTopPeersRequest
*github.com/gotd/td/tg.ContactsTopPeers
github.com/gotd/td/tg.ContactsTopPeersClass (interface)
*github.com/gotd/td/tg.ContactsTopPeersDisabled
*github.com/gotd/td/tg.ContactsTopPeersNotModified
*github.com/gotd/td/tg.ContactsUnblockRequest
*github.com/gotd/td/tg.DataJSON
*github.com/gotd/td/tg.DCOption
*github.com/gotd/td/tg.DefaultHistoryTTL
*github.com/gotd/td/tg.Dialog
github.com/gotd/td/tg.DialogClass (interface)
*github.com/gotd/td/tg.DialogFilter
*github.com/gotd/td/tg.DialogFilterChatlist
github.com/gotd/td/tg.DialogFilterClass (interface)
*github.com/gotd/td/tg.DialogFilterClassVector
*github.com/gotd/td/tg.DialogFilterDefault
*github.com/gotd/td/tg.DialogFilterSuggested
*github.com/gotd/td/tg.DialogFilterSuggestedVector
*github.com/gotd/td/tg.DialogFolder
*github.com/gotd/td/tg.DialogPeer
github.com/gotd/td/tg.DialogPeerClass (interface)
*github.com/gotd/td/tg.DialogPeerClassVector
*github.com/gotd/td/tg.DialogPeerFolder
*github.com/gotd/td/tg.Document
*github.com/gotd/td/tg.DocumentAttributeAnimated
*github.com/gotd/td/tg.DocumentAttributeAudio
github.com/gotd/td/tg.DocumentAttributeClass (interface)
*github.com/gotd/td/tg.DocumentAttributeCustomEmoji
*github.com/gotd/td/tg.DocumentAttributeFilename
*github.com/gotd/td/tg.DocumentAttributeHasStickers
*github.com/gotd/td/tg.DocumentAttributeImageSize
*github.com/gotd/td/tg.DocumentAttributeSticker
*github.com/gotd/td/tg.DocumentAttributeVideo
github.com/gotd/td/tg.DocumentClass (interface)
*github.com/gotd/td/tg.DocumentClassVector
*github.com/gotd/td/tg.DocumentEmpty
*github.com/gotd/td/tg.Double
*github.com/gotd/td/tg.DraftMessage
github.com/gotd/td/tg.DraftMessageClass (interface)
*github.com/gotd/td/tg.DraftMessageEmpty
*github.com/gotd/td/tg.EmailVerificationApple
github.com/gotd/td/tg.EmailVerificationClass (interface)
*github.com/gotd/td/tg.EmailVerificationCode
*github.com/gotd/td/tg.EmailVerificationGoogle
github.com/gotd/td/tg.EmailVerifyPurposeClass (interface)
*github.com/gotd/td/tg.EmailVerifyPurposeLoginChange
*github.com/gotd/td/tg.EmailVerifyPurposeLoginSetup
*github.com/gotd/td/tg.EmailVerifyPurposePassport
*github.com/gotd/td/tg.EmojiGroup
*github.com/gotd/td/tg.EmojiKeyword
github.com/gotd/td/tg.EmojiKeywordClass (interface)
*github.com/gotd/td/tg.EmojiKeywordDeleted
*github.com/gotd/td/tg.EmojiKeywordsDifference
*github.com/gotd/td/tg.EmojiLanguage
*github.com/gotd/td/tg.EmojiLanguageVector
*github.com/gotd/td/tg.EmojiList
github.com/gotd/td/tg.EmojiListClass (interface)
*github.com/gotd/td/tg.EmojiListNotModified
*github.com/gotd/td/tg.EmojiStatus
github.com/gotd/td/tg.EmojiStatusClass (interface)
*github.com/gotd/td/tg.EmojiStatusEmpty
*github.com/gotd/td/tg.EmojiStatusUntil
*github.com/gotd/td/tg.EmojiURL
*github.com/gotd/td/tg.EncryptedChat
github.com/gotd/td/tg.EncryptedChatClass (interface)
*github.com/gotd/td/tg.EncryptedChatDiscarded
*github.com/gotd/td/tg.EncryptedChatEmpty
*github.com/gotd/td/tg.EncryptedChatRequested
*github.com/gotd/td/tg.EncryptedChatWaiting
*github.com/gotd/td/tg.EncryptedFile
github.com/gotd/td/tg.EncryptedFileClass (interface)
*github.com/gotd/td/tg.EncryptedFileEmpty
*github.com/gotd/td/tg.EncryptedMessage
github.com/gotd/td/tg.EncryptedMessageClass (interface)
*github.com/gotd/td/tg.EncryptedMessageService
*github.com/gotd/td/tg.Error
github.com/gotd/td/tg.ExportedChatInviteClass (interface)
*github.com/gotd/td/tg.ExportedChatlistInvite
*github.com/gotd/td/tg.ExportedContactToken
*github.com/gotd/td/tg.ExportedMessageLink
*github.com/gotd/td/tg.ExportedStoryLink
*github.com/gotd/td/tg.FileHash
*github.com/gotd/td/tg.FileHashVector
*github.com/gotd/td/tg.Folder
*github.com/gotd/td/tg.FolderPeer
*github.com/gotd/td/tg.FoldersEditPeerFoldersRequest
*github.com/gotd/td/tg.ForumTopic
github.com/gotd/td/tg.ForumTopicClass (interface)
*github.com/gotd/td/tg.ForumTopicDeleted
github.com/gotd/td/tg.FullChat (interface)
*github.com/gotd/td/tg.Game
*github.com/gotd/td/tg.GeoPoint
github.com/gotd/td/tg.GeoPointClass (interface)
*github.com/gotd/td/tg.GeoPointEmpty
*github.com/gotd/td/tg.GlobalPrivacySettings
*github.com/gotd/td/tg.GroupCall
github.com/gotd/td/tg.GroupCallClass (interface)
*github.com/gotd/td/tg.GroupCallDiscarded
*github.com/gotd/td/tg.GroupCallParticipant
*github.com/gotd/td/tg.GroupCallParticipantVideo
*github.com/gotd/td/tg.GroupCallParticipantVideoSourceGroup
*github.com/gotd/td/tg.GroupCallStreamChannel
*github.com/gotd/td/tg.HelpAcceptTermsOfServiceRequest
*github.com/gotd/td/tg.HelpAppConfig
github.com/gotd/td/tg.HelpAppConfigClass (interface)
*github.com/gotd/td/tg.HelpAppConfigNotModified
*github.com/gotd/td/tg.HelpAppUpdate
github.com/gotd/td/tg.HelpAppUpdateClass (interface)
*github.com/gotd/td/tg.HelpConfigSimple
*github.com/gotd/td/tg.HelpCountriesList
github.com/gotd/td/tg.HelpCountriesListClass (interface)
*github.com/gotd/td/tg.HelpCountriesListNotModified
*github.com/gotd/td/tg.HelpCountry
*github.com/gotd/td/tg.HelpCountryCode
*github.com/gotd/td/tg.HelpDeepLinkInfo
github.com/gotd/td/tg.HelpDeepLinkInfoClass (interface)
*github.com/gotd/td/tg.HelpDeepLinkInfoEmpty
*github.com/gotd/td/tg.HelpDismissSuggestionRequest
*github.com/gotd/td/tg.HelpEditUserInfoRequest
*github.com/gotd/td/tg.HelpGetAppConfigRequest
*github.com/gotd/td/tg.HelpGetAppUpdateRequest
*github.com/gotd/td/tg.HelpGetCDNConfigRequest
*github.com/gotd/td/tg.HelpGetConfigRequest
*github.com/gotd/td/tg.HelpGetCountriesListRequest
*github.com/gotd/td/tg.HelpGetDeepLinkInfoRequest
*github.com/gotd/td/tg.HelpGetInviteTextRequest
*github.com/gotd/td/tg.HelpGetNearestDCRequest
*github.com/gotd/td/tg.HelpGetPassportConfigRequest
*github.com/gotd/td/tg.HelpGetPeerColorsRequest
*github.com/gotd/td/tg.HelpGetPeerProfileColorsRequest
*github.com/gotd/td/tg.HelpGetPremiumPromoRequest
*github.com/gotd/td/tg.HelpGetPromoDataRequest
*github.com/gotd/td/tg.HelpGetRecentMeURLsRequest
*github.com/gotd/td/tg.HelpGetSupportNameRequest
*github.com/gotd/td/tg.HelpGetSupportRequest
*github.com/gotd/td/tg.HelpGetTermsOfServiceUpdateRequest
*github.com/gotd/td/tg.HelpGetUserInfoRequest
*github.com/gotd/td/tg.HelpHidePromoDataRequest
*github.com/gotd/td/tg.HelpInviteText
*github.com/gotd/td/tg.HelpNoAppUpdate
*github.com/gotd/td/tg.HelpPassportConfig
github.com/gotd/td/tg.HelpPassportConfigClass (interface)
*github.com/gotd/td/tg.HelpPassportConfigNotModified
*github.com/gotd/td/tg.HelpPeerColorOption
*github.com/gotd/td/tg.HelpPeerColorProfileSet
*github.com/gotd/td/tg.HelpPeerColorSet
github.com/gotd/td/tg.HelpPeerColorSetClass (interface)
*github.com/gotd/td/tg.HelpPeerColors
github.com/gotd/td/tg.HelpPeerColorsClass (interface)
*github.com/gotd/td/tg.HelpPeerColorsNotModified
*github.com/gotd/td/tg.HelpPremiumPromo
*github.com/gotd/td/tg.HelpPromoData
github.com/gotd/td/tg.HelpPromoDataClass (interface)
*github.com/gotd/td/tg.HelpPromoDataEmpty
*github.com/gotd/td/tg.HelpRecentMeURLs
*github.com/gotd/td/tg.HelpSaveAppLogRequest
*github.com/gotd/td/tg.HelpSetBotUpdatesStatusRequest
*github.com/gotd/td/tg.HelpSupport
*github.com/gotd/td/tg.HelpSupportName
*github.com/gotd/td/tg.HelpTermsOfService
*github.com/gotd/td/tg.HelpTermsOfServiceUpdate
github.com/gotd/td/tg.HelpTermsOfServiceUpdateClass (interface)
*github.com/gotd/td/tg.HelpTermsOfServiceUpdateEmpty
*github.com/gotd/td/tg.HelpUserInfo
github.com/gotd/td/tg.HelpUserInfoClass (interface)
*github.com/gotd/td/tg.HelpUserInfoEmpty
*github.com/gotd/td/tg.HighScore
*github.com/gotd/td/tg.ImportedContact
*github.com/gotd/td/tg.InitConnectionRequest
*github.com/gotd/td/tg.InlineBotSwitchPM
*github.com/gotd/td/tg.InlineBotWebView
*github.com/gotd/td/tg.InlineQueryPeerTypeBotPM
*github.com/gotd/td/tg.InlineQueryPeerTypeBroadcast
*github.com/gotd/td/tg.InlineQueryPeerTypeChat
github.com/gotd/td/tg.InlineQueryPeerTypeClass (interface)
*github.com/gotd/td/tg.InlineQueryPeerTypeMegagroup
*github.com/gotd/td/tg.InlineQueryPeerTypePM
*github.com/gotd/td/tg.InlineQueryPeerTypeSameBotPM
*github.com/gotd/td/tg.InputAppEvent
github.com/gotd/td/tg.InputBotAppClass (interface)
*github.com/gotd/td/tg.InputBotAppID
*github.com/gotd/td/tg.InputBotAppShortName
github.com/gotd/td/tg.InputBotInlineMessageClass (interface)
*github.com/gotd/td/tg.InputBotInlineMessageGame
*github.com/gotd/td/tg.InputBotInlineMessageID
*github.com/gotd/td/tg.InputBotInlineMessageID64
github.com/gotd/td/tg.InputBotInlineMessageIDClass (interface)
*github.com/gotd/td/tg.InputBotInlineMessageMediaAuto
*github.com/gotd/td/tg.InputBotInlineMessageMediaContact
*github.com/gotd/td/tg.InputBotInlineMessageMediaGeo
*github.com/gotd/td/tg.InputBotInlineMessageMediaInvoice
*github.com/gotd/td/tg.InputBotInlineMessageMediaVenue
*github.com/gotd/td/tg.InputBotInlineMessageMediaWebPage
*github.com/gotd/td/tg.InputBotInlineMessageText
*github.com/gotd/td/tg.InputBotInlineResult
github.com/gotd/td/tg.InputBotInlineResultClass (interface)
*github.com/gotd/td/tg.InputBotInlineResultDocument
*github.com/gotd/td/tg.InputBotInlineResultGame
*github.com/gotd/td/tg.InputBotInlineResultPhoto
*github.com/gotd/td/tg.InputChannel
github.com/gotd/td/tg.InputChannelClass (interface)
*github.com/gotd/td/tg.InputChannelEmpty
*github.com/gotd/td/tg.InputChannelFromMessage
*github.com/gotd/td/tg.InputChatlistDialogFilter
*github.com/gotd/td/tg.InputChatPhoto
github.com/gotd/td/tg.InputChatPhotoClass (interface)
*github.com/gotd/td/tg.InputChatPhotoEmpty
*github.com/gotd/td/tg.InputChatUploadedPhoto
*github.com/gotd/td/tg.InputCheckPasswordEmpty
*github.com/gotd/td/tg.InputCheckPasswordSRP
github.com/gotd/td/tg.InputCheckPasswordSRPClass (interface)
*github.com/gotd/td/tg.InputClientProxy
*github.com/gotd/td/tg.InputDialogPeer
github.com/gotd/td/tg.InputDialogPeerClass (interface)
*github.com/gotd/td/tg.InputDialogPeerFolder
*github.com/gotd/td/tg.InputDocument
github.com/gotd/td/tg.InputDocumentClass (interface)
*github.com/gotd/td/tg.InputDocumentEmpty
*github.com/gotd/td/tg.InputDocumentFileLocation
*github.com/gotd/td/tg.InputEncryptedChat
*github.com/gotd/td/tg.InputEncryptedFile
*github.com/gotd/td/tg.InputEncryptedFileBigUploaded
github.com/gotd/td/tg.InputEncryptedFileClass (interface)
*github.com/gotd/td/tg.InputEncryptedFileEmpty
*github.com/gotd/td/tg.InputEncryptedFileLocation
*github.com/gotd/td/tg.InputEncryptedFileUploaded
*github.com/gotd/td/tg.InputFile
*github.com/gotd/td/tg.InputFileBig
github.com/gotd/td/tg.InputFileClass (interface)
*github.com/gotd/td/tg.InputFileLocation
github.com/gotd/td/tg.InputFileLocationClass (interface)
*github.com/gotd/td/tg.InputFolderPeer
github.com/gotd/td/tg.InputGameClass (interface)
*github.com/gotd/td/tg.InputGameID
*github.com/gotd/td/tg.InputGameShortName
*github.com/gotd/td/tg.InputGeoPoint
github.com/gotd/td/tg.InputGeoPointClass (interface)
*github.com/gotd/td/tg.InputGeoPointEmpty
*github.com/gotd/td/tg.InputGroupCall
*github.com/gotd/td/tg.InputGroupCallStream
github.com/gotd/td/tg.InputInvoiceClass (interface)
*github.com/gotd/td/tg.InputInvoiceMessage
*github.com/gotd/td/tg.InputInvoicePremiumGiftCode
*github.com/gotd/td/tg.InputInvoiceSlug
*github.com/gotd/td/tg.InputKeyboardButtonURLAuth
*github.com/gotd/td/tg.InputKeyboardButtonUserProfile
*github.com/gotd/td/tg.InputMediaAreaChannelPost
*github.com/gotd/td/tg.InputMediaAreaVenue
github.com/gotd/td/tg.InputMediaClass (interface)
*github.com/gotd/td/tg.InputMediaContact
*github.com/gotd/td/tg.InputMediaDice
*github.com/gotd/td/tg.InputMediaDocument
*github.com/gotd/td/tg.InputMediaDocumentExternal
*github.com/gotd/td/tg.InputMediaEmpty
*github.com/gotd/td/tg.InputMediaGame
*github.com/gotd/td/tg.InputMediaGeoLive
*github.com/gotd/td/tg.InputMediaGeoPoint
*github.com/gotd/td/tg.InputMediaInvoice
*github.com/gotd/td/tg.InputMediaPhoto
*github.com/gotd/td/tg.InputMediaPhotoExternal
*github.com/gotd/td/tg.InputMediaPoll
*github.com/gotd/td/tg.InputMediaStory
*github.com/gotd/td/tg.InputMediaUploadedDocument
*github.com/gotd/td/tg.InputMediaUploadedPhoto
*github.com/gotd/td/tg.InputMediaVenue
*github.com/gotd/td/tg.InputMediaWebPage
*github.com/gotd/td/tg.InputMessageCallbackQuery
github.com/gotd/td/tg.InputMessageClass (interface)
*github.com/gotd/td/tg.InputMessageEntityMentionName
*github.com/gotd/td/tg.InputMessageID
*github.com/gotd/td/tg.InputMessagePinned
*github.com/gotd/td/tg.InputMessageReplyTo
*github.com/gotd/td/tg.InputMessagesFilterChatPhotos
*github.com/gotd/td/tg.InputMessagesFilterContacts
*github.com/gotd/td/tg.InputMessagesFilterDocument
*github.com/gotd/td/tg.InputMessagesFilterEmpty
*github.com/gotd/td/tg.InputMessagesFilterGeo
*github.com/gotd/td/tg.InputMessagesFilterGif
*github.com/gotd/td/tg.InputMessagesFilterMusic
*github.com/gotd/td/tg.InputMessagesFilterMyMentions
*github.com/gotd/td/tg.InputMessagesFilterPhoneCalls
*github.com/gotd/td/tg.InputMessagesFilterPhotos
*github.com/gotd/td/tg.InputMessagesFilterPhotoVideo
*github.com/gotd/td/tg.InputMessagesFilterPinned
*github.com/gotd/td/tg.InputMessagesFilterRoundVideo
*github.com/gotd/td/tg.InputMessagesFilterRoundVoice
*github.com/gotd/td/tg.InputMessagesFilterURL
*github.com/gotd/td/tg.InputMessagesFilterVideo
*github.com/gotd/td/tg.InputMessagesFilterVoice
*github.com/gotd/td/tg.InputNotifyBroadcasts
*github.com/gotd/td/tg.InputNotifyChats
*github.com/gotd/td/tg.InputNotifyForumTopic
*github.com/gotd/td/tg.InputNotifyPeer
github.com/gotd/td/tg.InputNotifyPeerClass (interface)
*github.com/gotd/td/tg.InputNotifyUsers
*github.com/gotd/td/tg.InputPaymentCredentials
*github.com/gotd/td/tg.InputPaymentCredentialsApplePay
github.com/gotd/td/tg.InputPaymentCredentialsClass (interface)
*github.com/gotd/td/tg.InputPaymentCredentialsGooglePay
*github.com/gotd/td/tg.InputPaymentCredentialsSaved
*github.com/gotd/td/tg.InputPeerChannel
*github.com/gotd/td/tg.InputPeerChannelFromMessage
*github.com/gotd/td/tg.InputPeerChat
github.com/gotd/td/tg.InputPeerClass (interface)
*github.com/gotd/td/tg.InputPeerEmpty
*github.com/gotd/td/tg.InputPeerNotifySettings
*github.com/gotd/td/tg.InputPeerPhotoFileLocation
*github.com/gotd/td/tg.InputPeerPhotoFileLocationLegacy
*github.com/gotd/td/tg.InputPeerSelf
*github.com/gotd/td/tg.InputPeerUser
*github.com/gotd/td/tg.InputPeerUserFromMessage
*github.com/gotd/td/tg.InputPhoneCall
*github.com/gotd/td/tg.InputPhoneContact
*github.com/gotd/td/tg.InputPhoto
github.com/gotd/td/tg.InputPhotoClass (interface)
*github.com/gotd/td/tg.InputPhotoEmpty
*github.com/gotd/td/tg.InputPhotoFileLocation
*github.com/gotd/td/tg.InputPhotoLegacyFileLocation
*github.com/gotd/td/tg.InputPrivacyKeyAbout
*github.com/gotd/td/tg.InputPrivacyKeyAddedByPhone
*github.com/gotd/td/tg.InputPrivacyKeyChatInvite
github.com/gotd/td/tg.InputPrivacyKeyClass (interface)
*github.com/gotd/td/tg.InputPrivacyKeyForwards
*github.com/gotd/td/tg.InputPrivacyKeyPhoneCall
*github.com/gotd/td/tg.InputPrivacyKeyPhoneNumber
*github.com/gotd/td/tg.InputPrivacyKeyPhoneP2P
*github.com/gotd/td/tg.InputPrivacyKeyProfilePhoto
*github.com/gotd/td/tg.InputPrivacyKeyStatusTimestamp
*github.com/gotd/td/tg.InputPrivacyKeyVoiceMessages
github.com/gotd/td/tg.InputPrivacyRuleClass (interface)
*github.com/gotd/td/tg.InputPrivacyValueAllowAll
*github.com/gotd/td/tg.InputPrivacyValueAllowChatParticipants
*github.com/gotd/td/tg.InputPrivacyValueAllowCloseFriends
*github.com/gotd/td/tg.InputPrivacyValueAllowContacts
*github.com/gotd/td/tg.InputPrivacyValueAllowUsers
*github.com/gotd/td/tg.InputPrivacyValueDisallowAll
*github.com/gotd/td/tg.InputPrivacyValueDisallowChatParticipants
*github.com/gotd/td/tg.InputPrivacyValueDisallowContacts
*github.com/gotd/td/tg.InputPrivacyValueDisallowUsers
github.com/gotd/td/tg.InputReplyToClass (interface)
*github.com/gotd/td/tg.InputReplyToMessage
*github.com/gotd/td/tg.InputReplyToStory
*github.com/gotd/td/tg.InputReportReasonChildAbuse
*github.com/gotd/td/tg.InputReportReasonCopyright
*github.com/gotd/td/tg.InputReportReasonFake
*github.com/gotd/td/tg.InputReportReasonGeoIrrelevant
*github.com/gotd/td/tg.InputReportReasonIllegalDrugs
*github.com/gotd/td/tg.InputReportReasonOther
*github.com/gotd/td/tg.InputReportReasonPersonalDetails
*github.com/gotd/td/tg.InputReportReasonPornography
*github.com/gotd/td/tg.InputReportReasonSpam
*github.com/gotd/td/tg.InputReportReasonViolence
*github.com/gotd/td/tg.InputSecureFile
github.com/gotd/td/tg.InputSecureFileClass (interface)
*github.com/gotd/td/tg.InputSecureFileLocation
*github.com/gotd/td/tg.InputSecureFileUploaded
*github.com/gotd/td/tg.InputSecureValue
*github.com/gotd/td/tg.InputSingleMedia
github.com/gotd/td/tg.InputStickeredMediaClass (interface)
*github.com/gotd/td/tg.InputStickeredMediaDocument
*github.com/gotd/td/tg.InputStickeredMediaPhoto
*github.com/gotd/td/tg.InputStickerSetAnimatedEmoji
*github.com/gotd/td/tg.InputStickerSetAnimatedEmojiAnimations
github.com/gotd/td/tg.InputStickerSetClass (interface)
*github.com/gotd/td/tg.InputStickerSetDice
*github.com/gotd/td/tg.InputStickerSetEmojiChannelDefaultStatuses
*github.com/gotd/td/tg.InputStickerSetEmojiDefaultStatuses
*github.com/gotd/td/tg.InputStickerSetEmojiDefaultTopicIcons
*github.com/gotd/td/tg.InputStickerSetEmojiGenericAnimations
*github.com/gotd/td/tg.InputStickerSetEmpty
*github.com/gotd/td/tg.InputStickerSetID
*github.com/gotd/td/tg.InputStickerSetItem
*github.com/gotd/td/tg.InputStickerSetPremiumGifts
*github.com/gotd/td/tg.InputStickerSetShortName
*github.com/gotd/td/tg.InputStickerSetThumb
*github.com/gotd/td/tg.InputStickerSetThumbLegacy
*github.com/gotd/td/tg.InputStorePaymentGiftPremium
*github.com/gotd/td/tg.InputStorePaymentPremiumGiftCode
*github.com/gotd/td/tg.InputStorePaymentPremiumGiveaway
*github.com/gotd/td/tg.InputStorePaymentPremiumSubscription
github.com/gotd/td/tg.InputStorePaymentPurposeClass (interface)
*github.com/gotd/td/tg.InputTakeoutFileLocation
*github.com/gotd/td/tg.InputTheme
github.com/gotd/td/tg.InputThemeClass (interface)
*github.com/gotd/td/tg.InputThemeSettings
*github.com/gotd/td/tg.InputThemeSlug
*github.com/gotd/td/tg.InputUser
github.com/gotd/td/tg.InputUserClass (interface)
*github.com/gotd/td/tg.InputUserEmpty
*github.com/gotd/td/tg.InputUserFromMessage
*github.com/gotd/td/tg.InputUserSelf
*github.com/gotd/td/tg.InputWallPaper
github.com/gotd/td/tg.InputWallPaperClass (interface)
*github.com/gotd/td/tg.InputWallPaperNoFile
*github.com/gotd/td/tg.InputWallPaperSlug
*github.com/gotd/td/tg.InputWebDocument
*github.com/gotd/td/tg.InputWebFileAudioAlbumThumbLocation
*github.com/gotd/td/tg.InputWebFileGeoPointLocation
*github.com/gotd/td/tg.InputWebFileLocation
github.com/gotd/td/tg.InputWebFileLocationClass (interface)
*github.com/gotd/td/tg.Int
*github.com/gotd/td/tg.IntVector
*github.com/gotd/td/tg.Invoice
*github.com/gotd/td/tg.InvokeAfterMsgRequest
*github.com/gotd/td/tg.InvokeAfterMsgsRequest
*github.com/gotd/td/tg.InvokeWithLayerRequest
*github.com/gotd/td/tg.InvokeWithMessagesRangeRequest
*github.com/gotd/td/tg.InvokeWithoutUpdatesRequest
*github.com/gotd/td/tg.InvokeWithTakeoutRequest
*github.com/gotd/td/tg.IPPort
github.com/gotd/td/tg.IPPortClass (interface)
*github.com/gotd/td/tg.IPPortSecret
*github.com/gotd/td/tg.JSONArray
*github.com/gotd/td/tg.JSONBool
*github.com/gotd/td/tg.JSONNull
*github.com/gotd/td/tg.JSONNumber
*github.com/gotd/td/tg.JSONObject
*github.com/gotd/td/tg.JSONObjectValue
*github.com/gotd/td/tg.JSONString
github.com/gotd/td/tg.JSONValueClass (interface)
*github.com/gotd/td/tg.KeyboardButton
*github.com/gotd/td/tg.KeyboardButtonBuy
*github.com/gotd/td/tg.KeyboardButtonCallback
github.com/gotd/td/tg.KeyboardButtonClass (interface)
*github.com/gotd/td/tg.KeyboardButtonGame
*github.com/gotd/td/tg.KeyboardButtonRequestGeoLocation
*github.com/gotd/td/tg.KeyboardButtonRequestPeer
*github.com/gotd/td/tg.KeyboardButtonRequestPhone
*github.com/gotd/td/tg.KeyboardButtonRequestPoll
*github.com/gotd/td/tg.KeyboardButtonRow
*github.com/gotd/td/tg.KeyboardButtonSimpleWebView
*github.com/gotd/td/tg.KeyboardButtonSwitchInline
*github.com/gotd/td/tg.KeyboardButtonURL
*github.com/gotd/td/tg.KeyboardButtonURLAuth
*github.com/gotd/td/tg.KeyboardButtonUserProfile
*github.com/gotd/td/tg.KeyboardButtonWebView
*github.com/gotd/td/tg.LabeledPrice
*github.com/gotd/td/tg.LangPackDifference
*github.com/gotd/td/tg.LangPackLanguage
*github.com/gotd/td/tg.LangPackLanguageVector
*github.com/gotd/td/tg.LangPackString
github.com/gotd/td/tg.LangPackStringClass (interface)
*github.com/gotd/td/tg.LangPackStringClassVector
*github.com/gotd/td/tg.LangPackStringDeleted
*github.com/gotd/td/tg.LangPackStringPluralized
*github.com/gotd/td/tg.LangpackGetDifferenceRequest
*github.com/gotd/td/tg.LangpackGetLangPackRequest
*github.com/gotd/td/tg.LangpackGetLanguageRequest
*github.com/gotd/td/tg.LangpackGetLanguagesRequest
*github.com/gotd/td/tg.LangpackGetStringsRequest
*github.com/gotd/td/tg.Long
*github.com/gotd/td/tg.LongVector
*github.com/gotd/td/tg.MaskCoords
*github.com/gotd/td/tg.MediaAreaChannelPost
github.com/gotd/td/tg.MediaAreaClass (interface)
*github.com/gotd/td/tg.MediaAreaCoordinates
*github.com/gotd/td/tg.MediaAreaGeoPoint
*github.com/gotd/td/tg.MediaAreaSuggestedReaction
*github.com/gotd/td/tg.MediaAreaVenue
*github.com/gotd/td/tg.Message
*github.com/gotd/td/tg.MessageActionBotAllowed
*github.com/gotd/td/tg.MessageActionChannelCreate
*github.com/gotd/td/tg.MessageActionChannelMigrateFrom
*github.com/gotd/td/tg.MessageActionChatAddUser
*github.com/gotd/td/tg.MessageActionChatCreate
*github.com/gotd/td/tg.MessageActionChatDeletePhoto
*github.com/gotd/td/tg.MessageActionChatDeleteUser
*github.com/gotd/td/tg.MessageActionChatEditPhoto
*github.com/gotd/td/tg.MessageActionChatEditTitle
*github.com/gotd/td/tg.MessageActionChatJoinedByLink
*github.com/gotd/td/tg.MessageActionChatJoinedByRequest
*github.com/gotd/td/tg.MessageActionChatMigrateTo
github.com/gotd/td/tg.MessageActionClass (interface)
*github.com/gotd/td/tg.MessageActionContactSignUp
*github.com/gotd/td/tg.MessageActionCustomAction
*github.com/gotd/td/tg.MessageActionEmpty
*github.com/gotd/td/tg.MessageActionGameScore
*github.com/gotd/td/tg.MessageActionGeoProximityReached
*github.com/gotd/td/tg.MessageActionGiftCode
*github.com/gotd/td/tg.MessageActionGiftPremium
*github.com/gotd/td/tg.MessageActionGiveawayLaunch
*github.com/gotd/td/tg.MessageActionGiveawayResults
*github.com/gotd/td/tg.MessageActionGroupCall
*github.com/gotd/td/tg.MessageActionGroupCallScheduled
*github.com/gotd/td/tg.MessageActionHistoryClear
*github.com/gotd/td/tg.MessageActionInviteToGroupCall
*github.com/gotd/td/tg.MessageActionPaymentSent
*github.com/gotd/td/tg.MessageActionPaymentSentMe
*github.com/gotd/td/tg.MessageActionPhoneCall
*github.com/gotd/td/tg.MessageActionPinMessage
*github.com/gotd/td/tg.MessageActionRequestedPeer
*github.com/gotd/td/tg.MessageActionScreenshotTaken
*github.com/gotd/td/tg.MessageActionSecureValuesSent
*github.com/gotd/td/tg.MessageActionSecureValuesSentMe
*github.com/gotd/td/tg.MessageActionSetChatTheme
*github.com/gotd/td/tg.MessageActionSetChatWallPaper
*github.com/gotd/td/tg.MessageActionSetMessagesTTL
*github.com/gotd/td/tg.MessageActionSuggestProfilePhoto
*github.com/gotd/td/tg.MessageActionTopicCreate
*github.com/gotd/td/tg.MessageActionTopicEdit
*github.com/gotd/td/tg.MessageActionWebViewDataSent
*github.com/gotd/td/tg.MessageActionWebViewDataSentMe
github.com/gotd/td/tg.MessageClass (interface)
*github.com/gotd/td/tg.MessageEmpty
*github.com/gotd/td/tg.MessageEntityBankCard
*github.com/gotd/td/tg.MessageEntityBlockquote
*github.com/gotd/td/tg.MessageEntityBold
*github.com/gotd/td/tg.MessageEntityBotCommand
*github.com/gotd/td/tg.MessageEntityCashtag
github.com/gotd/td/tg.MessageEntityClass (interface)
*github.com/gotd/td/tg.MessageEntityCode
*github.com/gotd/td/tg.MessageEntityCustomEmoji
*github.com/gotd/td/tg.MessageEntityEmail
*github.com/gotd/td/tg.MessageEntityHashtag
*github.com/gotd/td/tg.MessageEntityItalic
*github.com/gotd/td/tg.MessageEntityMention
*github.com/gotd/td/tg.MessageEntityMentionName
*github.com/gotd/td/tg.MessageEntityPhone
*github.com/gotd/td/tg.MessageEntityPre
*github.com/gotd/td/tg.MessageEntitySpoiler
*github.com/gotd/td/tg.MessageEntityStrike
*github.com/gotd/td/tg.MessageEntityTextURL
*github.com/gotd/td/tg.MessageEntityUnderline
*github.com/gotd/td/tg.MessageEntityUnknown
*github.com/gotd/td/tg.MessageEntityURL
*github.com/gotd/td/tg.MessageExtendedMedia
github.com/gotd/td/tg.MessageExtendedMediaClass (interface)
*github.com/gotd/td/tg.MessageExtendedMediaPreview
*github.com/gotd/td/tg.MessageFwdHeader
github.com/gotd/td/tg.MessageMediaClass (interface)
*github.com/gotd/td/tg.MessageMediaContact
*github.com/gotd/td/tg.MessageMediaDice
*github.com/gotd/td/tg.MessageMediaDocument
*github.com/gotd/td/tg.MessageMediaEmpty
*github.com/gotd/td/tg.MessageMediaGame
*github.com/gotd/td/tg.MessageMediaGeo
*github.com/gotd/td/tg.MessageMediaGeoLive
*github.com/gotd/td/tg.MessageMediaGiveaway
*github.com/gotd/td/tg.MessageMediaGiveawayResults
*github.com/gotd/td/tg.MessageMediaInvoice
*github.com/gotd/td/tg.MessageMediaPhoto
*github.com/gotd/td/tg.MessageMediaPoll
*github.com/gotd/td/tg.MessageMediaStory
*github.com/gotd/td/tg.MessageMediaUnsupported
*github.com/gotd/td/tg.MessageMediaVenue
*github.com/gotd/td/tg.MessageMediaWebPage
*github.com/gotd/td/tg.MessagePeerReaction
*github.com/gotd/td/tg.MessagePeerVote
github.com/gotd/td/tg.MessagePeerVoteClass (interface)
*github.com/gotd/td/tg.MessagePeerVoteInputOption
*github.com/gotd/td/tg.MessagePeerVoteMultiple
*github.com/gotd/td/tg.MessageRange
*github.com/gotd/td/tg.MessageRangeVector
*github.com/gotd/td/tg.MessageReactions
*github.com/gotd/td/tg.MessageReplies
*github.com/gotd/td/tg.MessageReplyHeader
github.com/gotd/td/tg.MessageReplyHeaderClass (interface)
*github.com/gotd/td/tg.MessageReplyStoryHeader
*github.com/gotd/td/tg.MessageService
*github.com/gotd/td/tg.MessagesAcceptEncryptionRequest
*github.com/gotd/td/tg.MessagesAcceptURLAuthRequest
*github.com/gotd/td/tg.MessagesAddChatUserRequest
*github.com/gotd/td/tg.MessagesAffectedFoundMessages
*github.com/gotd/td/tg.MessagesAffectedHistory
*github.com/gotd/td/tg.MessagesAffectedMessages
*github.com/gotd/td/tg.MessagesAllStickers
github.com/gotd/td/tg.MessagesAllStickersClass (interface)
*github.com/gotd/td/tg.MessagesAllStickersNotModified
*github.com/gotd/td/tg.MessagesArchivedStickers
*github.com/gotd/td/tg.MessagesAvailableReactions
github.com/gotd/td/tg.MessagesAvailableReactionsClass (interface)
*github.com/gotd/td/tg.MessagesAvailableReactionsNotModified
*github.com/gotd/td/tg.MessagesBotApp
*github.com/gotd/td/tg.MessagesBotCallbackAnswer
*github.com/gotd/td/tg.MessagesBotResults
*github.com/gotd/td/tg.MessagesChannelMessages
*github.com/gotd/td/tg.MessagesChatAdminsWithInvites
*github.com/gotd/td/tg.MessagesChatFull
*github.com/gotd/td/tg.MessagesChatInviteImporters
*github.com/gotd/td/tg.MessagesChats
github.com/gotd/td/tg.MessagesChatsClass (interface)
*github.com/gotd/td/tg.MessagesChatsSlice
*github.com/gotd/td/tg.MessagesCheckChatInviteRequest
*github.com/gotd/td/tg.MessagesCheckedHistoryImportPeer
*github.com/gotd/td/tg.MessagesCheckHistoryImportPeerRequest
*github.com/gotd/td/tg.MessagesCheckHistoryImportRequest
*github.com/gotd/td/tg.MessagesClearAllDraftsRequest
*github.com/gotd/td/tg.MessagesClearRecentReactionsRequest
*github.com/gotd/td/tg.MessagesClearRecentStickersRequest
*github.com/gotd/td/tg.MessagesCreateChatRequest
*github.com/gotd/td/tg.MessagesDeleteChatRequest
*github.com/gotd/td/tg.MessagesDeleteChatUserRequest
*github.com/gotd/td/tg.MessagesDeleteExportedChatInviteRequest
*github.com/gotd/td/tg.MessagesDeleteHistoryRequest
*github.com/gotd/td/tg.MessagesDeleteMessagesRequest
*github.com/gotd/td/tg.MessagesDeletePhoneCallHistoryRequest
*github.com/gotd/td/tg.MessagesDeleteRevokedExportedChatInvitesRequest
*github.com/gotd/td/tg.MessagesDeleteScheduledMessagesRequest
*github.com/gotd/td/tg.MessagesDhConfig
github.com/gotd/td/tg.MessagesDhConfigClass (interface)
*github.com/gotd/td/tg.MessagesDhConfigNotModified
*github.com/gotd/td/tg.MessagesDialogs
github.com/gotd/td/tg.MessagesDialogsClass (interface)
*github.com/gotd/td/tg.MessagesDialogsNotModified
*github.com/gotd/td/tg.MessagesDialogsSlice
*github.com/gotd/td/tg.MessagesDiscardEncryptionRequest
*github.com/gotd/td/tg.MessagesDiscussionMessage
*github.com/gotd/td/tg.MessagesEditChatAboutRequest
*github.com/gotd/td/tg.MessagesEditChatAdminRequest
*github.com/gotd/td/tg.MessagesEditChatDefaultBannedRightsRequest
*github.com/gotd/td/tg.MessagesEditChatPhotoRequest
*github.com/gotd/td/tg.MessagesEditChatTitleRequest
*github.com/gotd/td/tg.MessagesEditExportedChatInviteRequest
*github.com/gotd/td/tg.MessagesEditInlineBotMessageRequest
*github.com/gotd/td/tg.MessagesEditMessageRequest
*github.com/gotd/td/tg.MessagesEmojiGroups
github.com/gotd/td/tg.MessagesEmojiGroupsClass (interface)
*github.com/gotd/td/tg.MessagesEmojiGroupsNotModified
*github.com/gotd/td/tg.MessagesExportChatInviteRequest
*github.com/gotd/td/tg.MessagesExportedChatInvite
github.com/gotd/td/tg.MessagesExportedChatInviteClass (interface)
*github.com/gotd/td/tg.MessagesExportedChatInviteReplaced
*github.com/gotd/td/tg.MessagesExportedChatInvites
*github.com/gotd/td/tg.MessagesFavedStickers
github.com/gotd/td/tg.MessagesFavedStickersClass (interface)
*github.com/gotd/td/tg.MessagesFavedStickersNotModified
*github.com/gotd/td/tg.MessagesFaveStickerRequest
*github.com/gotd/td/tg.MessagesFeaturedStickers
github.com/gotd/td/tg.MessagesFeaturedStickersClass (interface)
*github.com/gotd/td/tg.MessagesFeaturedStickersNotModified
github.com/gotd/td/tg.MessagesFilterClass (interface)
*github.com/gotd/td/tg.MessagesForumTopics
*github.com/gotd/td/tg.MessagesForwardMessagesRequest
*github.com/gotd/td/tg.MessagesFoundStickerSets
github.com/gotd/td/tg.MessagesFoundStickerSetsClass (interface)
*github.com/gotd/td/tg.MessagesFoundStickerSetsNotModified
*github.com/gotd/td/tg.MessagesGetAdminsWithInvitesRequest
*github.com/gotd/td/tg.MessagesGetAllDraftsRequest
*github.com/gotd/td/tg.MessagesGetAllStickersRequest
*github.com/gotd/td/tg.MessagesGetArchivedStickersRequest
*github.com/gotd/td/tg.MessagesGetAttachedStickersRequest
*github.com/gotd/td/tg.MessagesGetAttachMenuBotRequest
*github.com/gotd/td/tg.MessagesGetAttachMenuBotsRequest
*github.com/gotd/td/tg.MessagesGetAvailableReactionsRequest
*github.com/gotd/td/tg.MessagesGetBotAppRequest
*github.com/gotd/td/tg.MessagesGetBotCallbackAnswerRequest
*github.com/gotd/td/tg.MessagesGetChatInviteImportersRequest
*github.com/gotd/td/tg.MessagesGetChatsRequest
*github.com/gotd/td/tg.MessagesGetCommonChatsRequest
*github.com/gotd/td/tg.MessagesGetCustomEmojiDocumentsRequest
*github.com/gotd/td/tg.MessagesGetDefaultHistoryTTLRequest
*github.com/gotd/td/tg.MessagesGetDhConfigRequest
*github.com/gotd/td/tg.MessagesGetDialogFiltersRequest
*github.com/gotd/td/tg.MessagesGetDialogsRequest
*github.com/gotd/td/tg.MessagesGetDialogUnreadMarksRequest
*github.com/gotd/td/tg.MessagesGetDiscussionMessageRequest
*github.com/gotd/td/tg.MessagesGetDocumentByHashRequest
*github.com/gotd/td/tg.MessagesGetEmojiGroupsRequest
*github.com/gotd/td/tg.MessagesGetEmojiKeywordsDifferenceRequest
*github.com/gotd/td/tg.MessagesGetEmojiKeywordsLanguagesRequest
*github.com/gotd/td/tg.MessagesGetEmojiKeywordsRequest
*github.com/gotd/td/tg.MessagesGetEmojiProfilePhotoGroupsRequest
*github.com/gotd/td/tg.MessagesGetEmojiStatusGroupsRequest
*github.com/gotd/td/tg.MessagesGetEmojiStickersRequest
*github.com/gotd/td/tg.MessagesGetEmojiURLRequest
*github.com/gotd/td/tg.MessagesGetExportedChatInviteRequest
*github.com/gotd/td/tg.MessagesGetExportedChatInvitesRequest
*github.com/gotd/td/tg.MessagesGetExtendedMediaRequest
*github.com/gotd/td/tg.MessagesGetFavedStickersRequest
*github.com/gotd/td/tg.MessagesGetFeaturedEmojiStickersRequest
*github.com/gotd/td/tg.MessagesGetFeaturedStickersRequest
*github.com/gotd/td/tg.MessagesGetFullChatRequest
*github.com/gotd/td/tg.MessagesGetGameHighScoresRequest
*github.com/gotd/td/tg.MessagesGetHistoryRequest
*github.com/gotd/td/tg.MessagesGetInlineBotResultsRequest
*github.com/gotd/td/tg.MessagesGetInlineGameHighScoresRequest
*github.com/gotd/td/tg.MessagesGetMaskStickersRequest
*github.com/gotd/td/tg.MessagesGetMessageEditDataRequest
*github.com/gotd/td/tg.MessagesGetMessageReactionsListRequest
*github.com/gotd/td/tg.MessagesGetMessageReadParticipantsRequest
*github.com/gotd/td/tg.MessagesGetMessagesReactionsRequest
*github.com/gotd/td/tg.MessagesGetMessagesRequest
*github.com/gotd/td/tg.MessagesGetMessagesViewsRequest
*github.com/gotd/td/tg.MessagesGetOldFeaturedStickersRequest
*github.com/gotd/td/tg.MessagesGetOnlinesRequest
*github.com/gotd/td/tg.MessagesGetPeerDialogsRequest
*github.com/gotd/td/tg.MessagesGetPeerSettingsRequest
*github.com/gotd/td/tg.MessagesGetPinnedDialogsRequest
*github.com/gotd/td/tg.MessagesGetPollResultsRequest
*github.com/gotd/td/tg.MessagesGetPollVotesRequest
*github.com/gotd/td/tg.MessagesGetRecentLocationsRequest
*github.com/gotd/td/tg.MessagesGetRecentReactionsRequest
*github.com/gotd/td/tg.MessagesGetRecentStickersRequest
*github.com/gotd/td/tg.MessagesGetRepliesRequest
*github.com/gotd/td/tg.MessagesGetSavedGifsRequest
*github.com/gotd/td/tg.MessagesGetScheduledHistoryRequest
*github.com/gotd/td/tg.MessagesGetScheduledMessagesRequest
*github.com/gotd/td/tg.MessagesGetSearchCountersRequest
*github.com/gotd/td/tg.MessagesGetSearchResultsCalendarRequest
*github.com/gotd/td/tg.MessagesGetSearchResultsPositionsRequest
*github.com/gotd/td/tg.MessagesGetSplitRangesRequest
*github.com/gotd/td/tg.MessagesGetStickerSetRequest
*github.com/gotd/td/tg.MessagesGetStickersRequest
*github.com/gotd/td/tg.MessagesGetSuggestedDialogFiltersRequest
*github.com/gotd/td/tg.MessagesGetTopReactionsRequest
*github.com/gotd/td/tg.MessagesGetUnreadMentionsRequest
*github.com/gotd/td/tg.MessagesGetUnreadReactionsRequest
*github.com/gotd/td/tg.MessagesGetWebPagePreviewRequest
*github.com/gotd/td/tg.MessagesGetWebPageRequest
*github.com/gotd/td/tg.MessagesHideAllChatJoinRequestsRequest
*github.com/gotd/td/tg.MessagesHideChatJoinRequestRequest
*github.com/gotd/td/tg.MessagesHidePeerSettingsBarRequest
*github.com/gotd/td/tg.MessagesHighScores
*github.com/gotd/td/tg.MessagesHistoryImport
*github.com/gotd/td/tg.MessagesHistoryImportParsed
*github.com/gotd/td/tg.MessagesImportChatInviteRequest
*github.com/gotd/td/tg.MessagesInactiveChats
*github.com/gotd/td/tg.MessagesInitHistoryImportRequest
*github.com/gotd/td/tg.MessagesInstallStickerSetRequest
*github.com/gotd/td/tg.MessagesMarkDialogUnreadRequest
*github.com/gotd/td/tg.MessagesMessageEditData
*github.com/gotd/td/tg.MessagesMessageReactionsList
*github.com/gotd/td/tg.MessagesMessages
github.com/gotd/td/tg.MessagesMessagesClass (interface)
*github.com/gotd/td/tg.MessagesMessagesNotModified
*github.com/gotd/td/tg.MessagesMessagesSlice
*github.com/gotd/td/tg.MessagesMessageViews
*github.com/gotd/td/tg.MessagesMigrateChatRequest
*github.com/gotd/td/tg.MessagesPeerDialogs
*github.com/gotd/td/tg.MessagesPeerSettings
*github.com/gotd/td/tg.MessagesProlongWebViewRequest
*github.com/gotd/td/tg.MessagesRateTranscribedAudioRequest
*github.com/gotd/td/tg.MessagesReactions
github.com/gotd/td/tg.MessagesReactionsClass (interface)
*github.com/gotd/td/tg.MessagesReactionsNotModified
*github.com/gotd/td/tg.MessagesReadDiscussionRequest
*github.com/gotd/td/tg.MessagesReadEncryptedHistoryRequest
*github.com/gotd/td/tg.MessagesReadFeaturedStickersRequest
*github.com/gotd/td/tg.MessagesReadHistoryRequest
*github.com/gotd/td/tg.MessagesReadMentionsRequest
*github.com/gotd/td/tg.MessagesReadMessageContentsRequest
*github.com/gotd/td/tg.MessagesReadReactionsRequest
*github.com/gotd/td/tg.MessagesReceivedMessagesRequest
*github.com/gotd/td/tg.MessagesReceivedQueueRequest
*github.com/gotd/td/tg.MessagesRecentStickers
github.com/gotd/td/tg.MessagesRecentStickersClass (interface)
*github.com/gotd/td/tg.MessagesRecentStickersNotModified
*github.com/gotd/td/tg.MessagesReorderPinnedDialogsRequest
*github.com/gotd/td/tg.MessagesReorderStickerSetsRequest
*github.com/gotd/td/tg.MessagesReportEncryptedSpamRequest
*github.com/gotd/td/tg.MessagesReportReactionRequest
*github.com/gotd/td/tg.MessagesReportRequest
*github.com/gotd/td/tg.MessagesReportSpamRequest
*github.com/gotd/td/tg.MessagesRequestAppWebViewRequest
*github.com/gotd/td/tg.MessagesRequestEncryptionRequest
*github.com/gotd/td/tg.MessagesRequestSimpleWebViewRequest
*github.com/gotd/td/tg.MessagesRequestURLAuthRequest
*github.com/gotd/td/tg.MessagesRequestWebViewRequest
*github.com/gotd/td/tg.MessagesSaveDefaultSendAsRequest
*github.com/gotd/td/tg.MessagesSaveDraftRequest
*github.com/gotd/td/tg.MessagesSavedGifs
github.com/gotd/td/tg.MessagesSavedGifsClass (interface)
*github.com/gotd/td/tg.MessagesSavedGifsNotModified
*github.com/gotd/td/tg.MessagesSaveGifRequest
*github.com/gotd/td/tg.MessagesSaveRecentStickerRequest
*github.com/gotd/td/tg.MessagesSearchCounter
*github.com/gotd/td/tg.MessagesSearchCounterVector
*github.com/gotd/td/tg.MessagesSearchCustomEmojiRequest
*github.com/gotd/td/tg.MessagesSearchEmojiStickerSetsRequest
*github.com/gotd/td/tg.MessagesSearchGlobalRequest
*github.com/gotd/td/tg.MessagesSearchRequest
*github.com/gotd/td/tg.MessagesSearchResultsCalendar
*github.com/gotd/td/tg.MessagesSearchResultsPositions
*github.com/gotd/td/tg.MessagesSearchSentMediaRequest
*github.com/gotd/td/tg.MessagesSearchStickerSetsRequest
*github.com/gotd/td/tg.MessagesSendBotRequestedPeerRequest
*github.com/gotd/td/tg.MessagesSendEncryptedFileRequest
*github.com/gotd/td/tg.MessagesSendEncryptedRequest
*github.com/gotd/td/tg.MessagesSendEncryptedServiceRequest
*github.com/gotd/td/tg.MessagesSendInlineBotResultRequest
*github.com/gotd/td/tg.MessagesSendMediaRequest
*github.com/gotd/td/tg.MessagesSendMessageRequest
*github.com/gotd/td/tg.MessagesSendMultiMediaRequest
*github.com/gotd/td/tg.MessagesSendReactionRequest
*github.com/gotd/td/tg.MessagesSendScheduledMessagesRequest
*github.com/gotd/td/tg.MessagesSendScreenshotNotificationRequest
*github.com/gotd/td/tg.MessagesSendVoteRequest
*github.com/gotd/td/tg.MessagesSendWebViewDataRequest
*github.com/gotd/td/tg.MessagesSendWebViewResultMessageRequest
*github.com/gotd/td/tg.MessagesSentEncryptedFile
*github.com/gotd/td/tg.MessagesSentEncryptedMessage
github.com/gotd/td/tg.MessagesSentEncryptedMessageClass (interface)
*github.com/gotd/td/tg.MessagesSetBotCallbackAnswerRequest
*github.com/gotd/td/tg.MessagesSetBotPrecheckoutResultsRequest
*github.com/gotd/td/tg.MessagesSetBotShippingResultsRequest
*github.com/gotd/td/tg.MessagesSetChatAvailableReactionsRequest
*github.com/gotd/td/tg.MessagesSetChatThemeRequest
*github.com/gotd/td/tg.MessagesSetChatWallPaperRequest
*github.com/gotd/td/tg.MessagesSetDefaultHistoryTTLRequest
*github.com/gotd/td/tg.MessagesSetDefaultReactionRequest
*github.com/gotd/td/tg.MessagesSetEncryptedTypingRequest
*github.com/gotd/td/tg.MessagesSetGameScoreRequest
*github.com/gotd/td/tg.MessagesSetHistoryTTLRequest
*github.com/gotd/td/tg.MessagesSetInlineBotResultsRequest
*github.com/gotd/td/tg.MessagesSetInlineGameScoreRequest
*github.com/gotd/td/tg.MessagesSetTypingRequest
*github.com/gotd/td/tg.MessagesSponsoredMessages
github.com/gotd/td/tg.MessagesSponsoredMessagesClass (interface)
*github.com/gotd/td/tg.MessagesSponsoredMessagesEmpty
*github.com/gotd/td/tg.MessagesStartBotRequest
*github.com/gotd/td/tg.MessagesStartHistoryImportRequest
*github.com/gotd/td/tg.MessagesStickerSet
github.com/gotd/td/tg.MessagesStickerSetClass (interface)
*github.com/gotd/td/tg.MessagesStickerSetInstallResultArchive
github.com/gotd/td/tg.MessagesStickerSetInstallResultClass (interface)
*github.com/gotd/td/tg.MessagesStickerSetInstallResultSuccess
*github.com/gotd/td/tg.MessagesStickerSetNotModified
*github.com/gotd/td/tg.MessagesStickers
github.com/gotd/td/tg.MessagesStickersClass (interface)
*github.com/gotd/td/tg.MessagesStickersNotModified
*github.com/gotd/td/tg.MessagesToggleBotInAttachMenuRequest
*github.com/gotd/td/tg.MessagesToggleDialogPinRequest
*github.com/gotd/td/tg.MessagesToggleNoForwardsRequest
*github.com/gotd/td/tg.MessagesTogglePeerTranslationsRequest
*github.com/gotd/td/tg.MessagesToggleStickerSetsRequest
*github.com/gotd/td/tg.MessagesTranscribeAudioRequest
*github.com/gotd/td/tg.MessagesTranscribedAudio
*github.com/gotd/td/tg.MessagesTranslateResult
*github.com/gotd/td/tg.MessagesTranslateTextRequest
*github.com/gotd/td/tg.MessagesUninstallStickerSetRequest
*github.com/gotd/td/tg.MessagesUnpinAllMessagesRequest
*github.com/gotd/td/tg.MessagesUpdateDialogFilterRequest
*github.com/gotd/td/tg.MessagesUpdateDialogFiltersOrderRequest
*github.com/gotd/td/tg.MessagesUpdatePinnedMessageRequest
*github.com/gotd/td/tg.MessagesUploadEncryptedFileRequest
*github.com/gotd/td/tg.MessagesUploadImportedMediaRequest
*github.com/gotd/td/tg.MessagesUploadMediaRequest
*github.com/gotd/td/tg.MessagesVotesList
*github.com/gotd/td/tg.MessagesWebPage
*github.com/gotd/td/tg.MessageViews
github.com/gotd/td/tg.ModifiedMessagesDialogs (interface)
github.com/gotd/td/tg.ModifiedMessagesMessages (interface)
github.com/gotd/td/tg.ModifiedWebPage (interface)
*github.com/gotd/td/tg.MyBoost
*github.com/gotd/td/tg.NearestDC
github.com/gotd/td/tg.NotEmptyChat (interface)
github.com/gotd/td/tg.NotEmptyEmojiStatus (interface)
github.com/gotd/td/tg.NotEmptyEncryptedChat (interface)
github.com/gotd/td/tg.NotEmptyInputChannel (interface)
github.com/gotd/td/tg.NotEmptyInputEncryptedFile (interface)
github.com/gotd/td/tg.NotEmptyMessage (interface)
github.com/gotd/td/tg.NotEmptyPhoneCall (interface)
github.com/gotd/td/tg.NotEmptyPhotoSize (interface)
github.com/gotd/td/tg.NotEmptyUpdatesChannelDifference (interface)
github.com/gotd/td/tg.NotForbiddenChat (interface)
github.com/gotd/td/tg.NotificationSoundClass (interface)
*github.com/gotd/td/tg.NotificationSoundDefault
*github.com/gotd/td/tg.NotificationSoundLocal
*github.com/gotd/td/tg.NotificationSoundNone
*github.com/gotd/td/tg.NotificationSoundRingtone
*github.com/gotd/td/tg.NotifyBroadcasts
*github.com/gotd/td/tg.NotifyChats
*github.com/gotd/td/tg.NotifyForumTopic
*github.com/gotd/td/tg.NotifyPeer
github.com/gotd/td/tg.NotifyPeerClass (interface)
*github.com/gotd/td/tg.NotifyUsers
*github.com/gotd/td/tg.Null
*github.com/gotd/td/tg.Page
*github.com/gotd/td/tg.PageBlockAnchor
*github.com/gotd/td/tg.PageBlockAudio
*github.com/gotd/td/tg.PageBlockAuthorDate
*github.com/gotd/td/tg.PageBlockBlockquote
*github.com/gotd/td/tg.PageBlockChannel
github.com/gotd/td/tg.PageBlockClass (interface)
*github.com/gotd/td/tg.PageBlockCollage
*github.com/gotd/td/tg.PageBlockCover
*github.com/gotd/td/tg.PageBlockDetails
*github.com/gotd/td/tg.PageBlockDivider
*github.com/gotd/td/tg.PageBlockEmbed
*github.com/gotd/td/tg.PageBlockEmbedPost
*github.com/gotd/td/tg.PageBlockFooter
*github.com/gotd/td/tg.PageBlockHeader
*github.com/gotd/td/tg.PageBlockKicker
*github.com/gotd/td/tg.PageBlockList
*github.com/gotd/td/tg.PageBlockMap
*github.com/gotd/td/tg.PageBlockOrderedList
*github.com/gotd/td/tg.PageBlockParagraph
*github.com/gotd/td/tg.PageBlockPhoto
*github.com/gotd/td/tg.PageBlockPreformatted
*github.com/gotd/td/tg.PageBlockPullquote
*github.com/gotd/td/tg.PageBlockRelatedArticles
*github.com/gotd/td/tg.PageBlockSlideshow
*github.com/gotd/td/tg.PageBlockSubheader
*github.com/gotd/td/tg.PageBlockSubtitle
*github.com/gotd/td/tg.PageBlockTable
*github.com/gotd/td/tg.PageBlockTitle
*github.com/gotd/td/tg.PageBlockUnsupported
*github.com/gotd/td/tg.PageBlockVideo
*github.com/gotd/td/tg.PageCaption
*github.com/gotd/td/tg.PageListItemBlocks
github.com/gotd/td/tg.PageListItemClass (interface)
*github.com/gotd/td/tg.PageListItemText
*github.com/gotd/td/tg.PageListOrderedItemBlocks
github.com/gotd/td/tg.PageListOrderedItemClass (interface)
*github.com/gotd/td/tg.PageListOrderedItemText
*github.com/gotd/td/tg.PageRelatedArticle
*github.com/gotd/td/tg.PageTableCell
*github.com/gotd/td/tg.PageTableRow
github.com/gotd/td/tg.PasswordKdfAlgoClass (interface)
*github.com/gotd/td/tg.PasswordKdfAlgoSHA256SHA256PBKDF2HMACSHA512iter100000SHA256ModPow
*github.com/gotd/td/tg.PasswordKdfAlgoUnknown
*github.com/gotd/td/tg.PaymentCharge
*github.com/gotd/td/tg.PaymentFormMethod
*github.com/gotd/td/tg.PaymentRequestedInfo
*github.com/gotd/td/tg.PaymentSavedCredentialsCard
*github.com/gotd/td/tg.PaymentsApplyGiftCodeRequest
*github.com/gotd/td/tg.PaymentsAssignAppStoreTransactionRequest
*github.com/gotd/td/tg.PaymentsAssignPlayMarketTransactionRequest
*github.com/gotd/td/tg.PaymentsBankCardData
*github.com/gotd/td/tg.PaymentsCanPurchasePremiumRequest
*github.com/gotd/td/tg.PaymentsCheckedGiftCode
*github.com/gotd/td/tg.PaymentsCheckGiftCodeRequest
*github.com/gotd/td/tg.PaymentsClearSavedInfoRequest
*github.com/gotd/td/tg.PaymentsExportedInvoice
*github.com/gotd/td/tg.PaymentsExportInvoiceRequest
*github.com/gotd/td/tg.PaymentsGetBankCardDataRequest
*github.com/gotd/td/tg.PaymentsGetGiveawayInfoRequest
*github.com/gotd/td/tg.PaymentsGetPaymentFormRequest
*github.com/gotd/td/tg.PaymentsGetPaymentReceiptRequest
*github.com/gotd/td/tg.PaymentsGetPremiumGiftCodeOptionsRequest
*github.com/gotd/td/tg.PaymentsGetSavedInfoRequest
*github.com/gotd/td/tg.PaymentsGiveawayInfo
github.com/gotd/td/tg.PaymentsGiveawayInfoClass (interface)
*github.com/gotd/td/tg.PaymentsGiveawayInfoResults
*github.com/gotd/td/tg.PaymentsLaunchPrepaidGiveawayRequest
*github.com/gotd/td/tg.PaymentsPaymentForm
*github.com/gotd/td/tg.PaymentsPaymentReceipt
*github.com/gotd/td/tg.PaymentsPaymentResult
github.com/gotd/td/tg.PaymentsPaymentResultClass (interface)
*github.com/gotd/td/tg.PaymentsPaymentVerificationNeeded
*github.com/gotd/td/tg.PaymentsSavedInfo
*github.com/gotd/td/tg.PaymentsSendPaymentFormRequest
*github.com/gotd/td/tg.PaymentsValidatedRequestedInfo
*github.com/gotd/td/tg.PaymentsValidateRequestedInfoRequest
*github.com/gotd/td/tg.PeerBlocked
*github.com/gotd/td/tg.PeerChannel
*github.com/gotd/td/tg.PeerChat
github.com/gotd/td/tg.PeerClass (interface)
*github.com/gotd/td/tg.PeerClassVector
*github.com/gotd/td/tg.PeerColor
*github.com/gotd/td/tg.PeerLocated
github.com/gotd/td/tg.PeerLocatedClass (interface)
*github.com/gotd/td/tg.PeerNotifySettings
*github.com/gotd/td/tg.PeerSelfLocated
*github.com/gotd/td/tg.PeerSettings
*github.com/gotd/td/tg.PeerStories
*github.com/gotd/td/tg.PeerUser
*github.com/gotd/td/tg.PhoneAcceptCallRequest
*github.com/gotd/td/tg.PhoneCall
*github.com/gotd/td/tg.PhoneCallAccepted
github.com/gotd/td/tg.PhoneCallClass (interface)
*github.com/gotd/td/tg.PhoneCallDiscarded
*github.com/gotd/td/tg.PhoneCallDiscardReasonBusy
github.com/gotd/td/tg.PhoneCallDiscardReasonClass (interface)
*github.com/gotd/td/tg.PhoneCallDiscardReasonDisconnect
*github.com/gotd/td/tg.PhoneCallDiscardReasonHangup
*github.com/gotd/td/tg.PhoneCallDiscardReasonMissed
*github.com/gotd/td/tg.PhoneCallEmpty
*github.com/gotd/td/tg.PhoneCallProtocol
*github.com/gotd/td/tg.PhoneCallRequested
*github.com/gotd/td/tg.PhoneCallWaiting
*github.com/gotd/td/tg.PhoneCheckGroupCallRequest
*github.com/gotd/td/tg.PhoneConfirmCallRequest
*github.com/gotd/td/tg.PhoneConnection
github.com/gotd/td/tg.PhoneConnectionClass (interface)
*github.com/gotd/td/tg.PhoneConnectionWebrtc
*github.com/gotd/td/tg.PhoneCreateGroupCallRequest
*github.com/gotd/td/tg.PhoneDiscardCallRequest
*github.com/gotd/td/tg.PhoneDiscardGroupCallRequest
*github.com/gotd/td/tg.PhoneEditGroupCallParticipantRequest
*github.com/gotd/td/tg.PhoneEditGroupCallTitleRequest
*github.com/gotd/td/tg.PhoneExportedGroupCallInvite
*github.com/gotd/td/tg.PhoneExportGroupCallInviteRequest
*github.com/gotd/td/tg.PhoneGetCallConfigRequest
*github.com/gotd/td/tg.PhoneGetGroupCallJoinAsRequest
*github.com/gotd/td/tg.PhoneGetGroupCallRequest
*github.com/gotd/td/tg.PhoneGetGroupCallStreamChannelsRequest
*github.com/gotd/td/tg.PhoneGetGroupCallStreamRtmpURLRequest
*github.com/gotd/td/tg.PhoneGetGroupParticipantsRequest
*github.com/gotd/td/tg.PhoneGroupCall
*github.com/gotd/td/tg.PhoneGroupCallStreamChannels
*github.com/gotd/td/tg.PhoneGroupCallStreamRtmpURL
*github.com/gotd/td/tg.PhoneGroupParticipants
*github.com/gotd/td/tg.PhoneInviteToGroupCallRequest
*github.com/gotd/td/tg.PhoneJoinAsPeers
*github.com/gotd/td/tg.PhoneJoinGroupCallPresentationRequest
*github.com/gotd/td/tg.PhoneJoinGroupCallRequest
*github.com/gotd/td/tg.PhoneLeaveGroupCallPresentationRequest
*github.com/gotd/td/tg.PhoneLeaveGroupCallRequest
*github.com/gotd/td/tg.PhonePhoneCall
*github.com/gotd/td/tg.PhoneReceivedCallRequest
*github.com/gotd/td/tg.PhoneRequestCallRequest
*github.com/gotd/td/tg.PhoneSaveCallDebugRequest
*github.com/gotd/td/tg.PhoneSaveCallLogRequest
*github.com/gotd/td/tg.PhoneSaveDefaultGroupCallJoinAsRequest
*github.com/gotd/td/tg.PhoneSendSignalingDataRequest
*github.com/gotd/td/tg.PhoneSetCallRatingRequest
*github.com/gotd/td/tg.PhoneStartScheduledGroupCallRequest
*github.com/gotd/td/tg.PhoneToggleGroupCallRecordRequest
*github.com/gotd/td/tg.PhoneToggleGroupCallSettingsRequest
*github.com/gotd/td/tg.PhoneToggleGroupCallStartSubscriptionRequest
*github.com/gotd/td/tg.Photo
*github.com/gotd/td/tg.PhotoCachedSize
github.com/gotd/td/tg.PhotoClass (interface)
*github.com/gotd/td/tg.PhotoEmpty
*github.com/gotd/td/tg.PhotoPathSize
*github.com/gotd/td/tg.PhotoSize
github.com/gotd/td/tg.PhotoSizeClass (interface)
*github.com/gotd/td/tg.PhotoSizeEmpty
*github.com/gotd/td/tg.PhotoSizeProgressive
*github.com/gotd/td/tg.PhotoStrippedSize
*github.com/gotd/td/tg.PhotosDeletePhotosRequest
*github.com/gotd/td/tg.PhotosGetUserPhotosRequest
*github.com/gotd/td/tg.PhotosPhoto
*github.com/gotd/td/tg.PhotosPhotos
github.com/gotd/td/tg.PhotosPhotosClass (interface)
*github.com/gotd/td/tg.PhotosPhotosSlice
*github.com/gotd/td/tg.PhotosUpdateProfilePhotoRequest
*github.com/gotd/td/tg.PhotosUploadContactProfilePhotoRequest
*github.com/gotd/td/tg.PhotosUploadProfilePhotoRequest
*github.com/gotd/td/tg.Poll
*github.com/gotd/td/tg.PollAnswer
*github.com/gotd/td/tg.PollAnswerVoters
*github.com/gotd/td/tg.PollResults
*github.com/gotd/td/tg.PopularContact
*github.com/gotd/td/tg.PostAddress
github.com/gotd/td/tg.PostInteractionCountersClass (interface)
*github.com/gotd/td/tg.PostInteractionCountersMessage
*github.com/gotd/td/tg.PostInteractionCountersStory
*github.com/gotd/td/tg.PremiumApplyBoostRequest
*github.com/gotd/td/tg.PremiumBoostsList
*github.com/gotd/td/tg.PremiumBoostsStatus
*github.com/gotd/td/tg.PremiumGetBoostsListRequest
*github.com/gotd/td/tg.PremiumGetBoostsStatusRequest
*github.com/gotd/td/tg.PremiumGetMyBoostsRequest
*github.com/gotd/td/tg.PremiumGetUserBoostsRequest
*github.com/gotd/td/tg.PremiumGiftCodeOption
*github.com/gotd/td/tg.PremiumGiftCodeOptionVector
*github.com/gotd/td/tg.PremiumGiftOption
*github.com/gotd/td/tg.PremiumMyBoosts
*github.com/gotd/td/tg.PremiumSubscriptionOption
*github.com/gotd/td/tg.PrepaidGiveaway
*github.com/gotd/td/tg.PrivacyKeyAbout
*github.com/gotd/td/tg.PrivacyKeyAddedByPhone
*github.com/gotd/td/tg.PrivacyKeyChatInvite
github.com/gotd/td/tg.PrivacyKeyClass (interface)
*github.com/gotd/td/tg.PrivacyKeyForwards
*github.com/gotd/td/tg.PrivacyKeyPhoneCall
*github.com/gotd/td/tg.PrivacyKeyPhoneNumber
*github.com/gotd/td/tg.PrivacyKeyPhoneP2P
*github.com/gotd/td/tg.PrivacyKeyProfilePhoto
*github.com/gotd/td/tg.PrivacyKeyStatusTimestamp
*github.com/gotd/td/tg.PrivacyKeyVoiceMessages
github.com/gotd/td/tg.PrivacyRuleClass (interface)
*github.com/gotd/td/tg.PrivacyValueAllowAll
*github.com/gotd/td/tg.PrivacyValueAllowChatParticipants
*github.com/gotd/td/tg.PrivacyValueAllowCloseFriends
*github.com/gotd/td/tg.PrivacyValueAllowContacts
*github.com/gotd/td/tg.PrivacyValueAllowUsers
*github.com/gotd/td/tg.PrivacyValueDisallowAll
*github.com/gotd/td/tg.PrivacyValueDisallowChatParticipants
*github.com/gotd/td/tg.PrivacyValueDisallowContacts
*github.com/gotd/td/tg.PrivacyValueDisallowUsers
github.com/gotd/td/tg.PublicForwardClass (interface)
*github.com/gotd/td/tg.PublicForwardMessage
*github.com/gotd/td/tg.PublicForwardStory
github.com/gotd/td/tg.ReactionClass (interface)
*github.com/gotd/td/tg.ReactionCount
*github.com/gotd/td/tg.ReactionCustomEmoji
*github.com/gotd/td/tg.ReactionEmoji
*github.com/gotd/td/tg.ReactionEmpty
*github.com/gotd/td/tg.ReadParticipantDate
*github.com/gotd/td/tg.ReadParticipantDateVector
*github.com/gotd/td/tg.ReceivedNotifyMessage
*github.com/gotd/td/tg.ReceivedNotifyMessageVector
*github.com/gotd/td/tg.RecentMeURLChat
*github.com/gotd/td/tg.RecentMeURLChatInvite
github.com/gotd/td/tg.RecentMeURLClass (interface)
*github.com/gotd/td/tg.RecentMeURLStickerSet
*github.com/gotd/td/tg.RecentMeURLUnknown
*github.com/gotd/td/tg.RecentMeURLUser
*github.com/gotd/td/tg.ReplyInlineMarkup
*github.com/gotd/td/tg.ReplyKeyboardForceReply
*github.com/gotd/td/tg.ReplyKeyboardHide
*github.com/gotd/td/tg.ReplyKeyboardMarkup
github.com/gotd/td/tg.ReplyMarkupClass (interface)
github.com/gotd/td/tg.ReportReasonClass (interface)
*github.com/gotd/td/tg.RequestPeerTypeBroadcast
*github.com/gotd/td/tg.RequestPeerTypeChat
github.com/gotd/td/tg.RequestPeerTypeClass (interface)
*github.com/gotd/td/tg.RequestPeerTypeUser
*github.com/gotd/td/tg.RestrictionReason
github.com/gotd/td/tg.RichTextClass (interface)
*github.com/gotd/td/tg.SavedPhoneContact
*github.com/gotd/td/tg.SavedPhoneContactVector
*github.com/gotd/td/tg.SearchResultPosition
*github.com/gotd/td/tg.SearchResultsCalendarPeriod
*github.com/gotd/td/tg.SecureCredentialsEncrypted
*github.com/gotd/td/tg.SecureData
*github.com/gotd/td/tg.SecureFile
github.com/gotd/td/tg.SecureFileClass (interface)
*github.com/gotd/td/tg.SecureFileEmpty
github.com/gotd/td/tg.SecurePasswordKdfAlgoClass (interface)
*github.com/gotd/td/tg.SecurePasswordKdfAlgoPBKDF2HMACSHA512iter100000
*github.com/gotd/td/tg.SecurePasswordKdfAlgoSHA512
*github.com/gotd/td/tg.SecurePasswordKdfAlgoUnknown
github.com/gotd/td/tg.SecurePlainDataClass (interface)
*github.com/gotd/td/tg.SecurePlainEmail
*github.com/gotd/td/tg.SecurePlainPhone
*github.com/gotd/td/tg.SecureRequiredType
github.com/gotd/td/tg.SecureRequiredTypeClass (interface)
*github.com/gotd/td/tg.SecureRequiredTypeOneOf
*github.com/gotd/td/tg.SecureSecretSettings
*github.com/gotd/td/tg.SecureValue
*github.com/gotd/td/tg.SecureValueError
github.com/gotd/td/tg.SecureValueErrorClass (interface)
*github.com/gotd/td/tg.SecureValueErrorData
*github.com/gotd/td/tg.SecureValueErrorFile
*github.com/gotd/td/tg.SecureValueErrorFiles
*github.com/gotd/td/tg.SecureValueErrorFrontSide
*github.com/gotd/td/tg.SecureValueErrorReverseSide
*github.com/gotd/td/tg.SecureValueErrorSelfie
*github.com/gotd/td/tg.SecureValueErrorTranslationFile
*github.com/gotd/td/tg.SecureValueErrorTranslationFiles
*github.com/gotd/td/tg.SecureValueHash
*github.com/gotd/td/tg.SecureValueTypeAddress
*github.com/gotd/td/tg.SecureValueTypeBankStatement
github.com/gotd/td/tg.SecureValueTypeClass (interface)
*github.com/gotd/td/tg.SecureValueTypeDriverLicense
*github.com/gotd/td/tg.SecureValueTypeEmail
*github.com/gotd/td/tg.SecureValueTypeIdentityCard
*github.com/gotd/td/tg.SecureValueTypeInternalPassport
*github.com/gotd/td/tg.SecureValueTypePassport
*github.com/gotd/td/tg.SecureValueTypePassportRegistration
*github.com/gotd/td/tg.SecureValueTypePersonalDetails
*github.com/gotd/td/tg.SecureValueTypePhone
*github.com/gotd/td/tg.SecureValueTypeRentalAgreement
*github.com/gotd/td/tg.SecureValueTypeTemporaryRegistration
*github.com/gotd/td/tg.SecureValueTypeUtilityBill
*github.com/gotd/td/tg.SecureValueVector
*github.com/gotd/td/tg.SendAsPeer
github.com/gotd/td/tg.SendMessageActionClass (interface)
*github.com/gotd/td/tg.SendMessageCancelAction
*github.com/gotd/td/tg.SendMessageChooseContactAction
*github.com/gotd/td/tg.SendMessageChooseStickerAction
*github.com/gotd/td/tg.SendMessageEmojiInteraction
*github.com/gotd/td/tg.SendMessageEmojiInteractionSeen
*github.com/gotd/td/tg.SendMessageGamePlayAction
*github.com/gotd/td/tg.SendMessageGeoLocationAction
*github.com/gotd/td/tg.SendMessageHistoryImportAction
*github.com/gotd/td/tg.SendMessageRecordAudioAction
*github.com/gotd/td/tg.SendMessageRecordRoundAction
*github.com/gotd/td/tg.SendMessageRecordVideoAction
*github.com/gotd/td/tg.SendMessageTypingAction
*github.com/gotd/td/tg.SendMessageUploadAudioAction
*github.com/gotd/td/tg.SendMessageUploadDocumentAction
*github.com/gotd/td/tg.SendMessageUploadPhotoAction
*github.com/gotd/td/tg.SendMessageUploadRoundAction
*github.com/gotd/td/tg.SendMessageUploadVideoAction
*github.com/gotd/td/tg.ShippingOption
*github.com/gotd/td/tg.SimpleWebViewResultURL
*github.com/gotd/td/tg.SpeakingInGroupCallAction
*github.com/gotd/td/tg.SponsoredMessage
*github.com/gotd/td/tg.SponsoredWebPage
*github.com/gotd/td/tg.StatsAbsValueAndPrev
*github.com/gotd/td/tg.StatsBroadcastStats
*github.com/gotd/td/tg.StatsDateRangeDays
*github.com/gotd/td/tg.StatsGetBroadcastStatsRequest
*github.com/gotd/td/tg.StatsGetMegagroupStatsRequest
*github.com/gotd/td/tg.StatsGetMessagePublicForwardsRequest
*github.com/gotd/td/tg.StatsGetMessageStatsRequest
*github.com/gotd/td/tg.StatsGetStoryPublicForwardsRequest
*github.com/gotd/td/tg.StatsGetStoryStatsRequest
*github.com/gotd/td/tg.StatsGraph
*github.com/gotd/td/tg.StatsGraphAsync
github.com/gotd/td/tg.StatsGraphClass (interface)
*github.com/gotd/td/tg.StatsGraphError
*github.com/gotd/td/tg.StatsGroupTopAdmin
*github.com/gotd/td/tg.StatsGroupTopInviter
*github.com/gotd/td/tg.StatsGroupTopPoster
*github.com/gotd/td/tg.StatsLoadAsyncGraphRequest
*github.com/gotd/td/tg.StatsMegagroupStats
*github.com/gotd/td/tg.StatsMessageStats
*github.com/gotd/td/tg.StatsPercentValue
*github.com/gotd/td/tg.StatsPublicForwards
*github.com/gotd/td/tg.StatsStoryStats
*github.com/gotd/td/tg.StatsURL
*github.com/gotd/td/tg.StickerKeyword
*github.com/gotd/td/tg.StickerPack
*github.com/gotd/td/tg.StickerSet
*github.com/gotd/td/tg.StickerSetCovered
github.com/gotd/td/tg.StickerSetCoveredClass (interface)
*github.com/gotd/td/tg.StickerSetCoveredClassVector
*github.com/gotd/td/tg.StickerSetFullCovered
*github.com/gotd/td/tg.StickerSetMultiCovered
*github.com/gotd/td/tg.StickerSetNoCovered
*github.com/gotd/td/tg.StickersAddStickerToSetRequest
*github.com/gotd/td/tg.StickersChangeStickerPositionRequest
*github.com/gotd/td/tg.StickersChangeStickerRequest
*github.com/gotd/td/tg.StickersCheckShortNameRequest
*github.com/gotd/td/tg.StickersCreateStickerSetRequest
*github.com/gotd/td/tg.StickersDeleteStickerSetRequest
*github.com/gotd/td/tg.StickersRemoveStickerFromSetRequest
*github.com/gotd/td/tg.StickersRenameStickerSetRequest
*github.com/gotd/td/tg.StickersSetStickerSetThumbRequest
*github.com/gotd/td/tg.StickersSuggestedShortName
*github.com/gotd/td/tg.StickersSuggestShortNameRequest
*github.com/gotd/td/tg.StorageFileGif
*github.com/gotd/td/tg.StorageFileJpeg
*github.com/gotd/td/tg.StorageFileMov
*github.com/gotd/td/tg.StorageFileMp3
*github.com/gotd/td/tg.StorageFileMp4
*github.com/gotd/td/tg.StorageFilePartial
*github.com/gotd/td/tg.StorageFilePdf
*github.com/gotd/td/tg.StorageFilePng
github.com/gotd/td/tg.StorageFileTypeClass (interface)
*github.com/gotd/td/tg.StorageFileUnknown
*github.com/gotd/td/tg.StorageFileWebp
*github.com/gotd/td/tg.StoriesActivateStealthModeRequest
*github.com/gotd/td/tg.StoriesAllStories
github.com/gotd/td/tg.StoriesAllStoriesClass (interface)
*github.com/gotd/td/tg.StoriesAllStoriesNotModified
*github.com/gotd/td/tg.StoriesCanSendStoryRequest
*github.com/gotd/td/tg.StoriesDeleteStoriesRequest
*github.com/gotd/td/tg.StoriesEditStoryRequest
*github.com/gotd/td/tg.StoriesExportStoryLinkRequest
*github.com/gotd/td/tg.StoriesGetAllReadPeerStoriesRequest
*github.com/gotd/td/tg.StoriesGetAllStoriesRequest
*github.com/gotd/td/tg.StoriesGetChatsToSendRequest
*github.com/gotd/td/tg.StoriesGetPeerMaxIDsRequest
*github.com/gotd/td/tg.StoriesGetPeerStoriesRequest
*github.com/gotd/td/tg.StoriesGetPinnedStoriesRequest
*github.com/gotd/td/tg.StoriesGetStoriesArchiveRequest
*github.com/gotd/td/tg.StoriesGetStoriesByIDRequest
*github.com/gotd/td/tg.StoriesGetStoriesViewsRequest
*github.com/gotd/td/tg.StoriesGetStoryReactionsListRequest
*github.com/gotd/td/tg.StoriesGetStoryViewsListRequest
*github.com/gotd/td/tg.StoriesIncrementStoryViewsRequest
*github.com/gotd/td/tg.StoriesPeerStories
*github.com/gotd/td/tg.StoriesReadStoriesRequest
*github.com/gotd/td/tg.StoriesReportRequest
*github.com/gotd/td/tg.StoriesSendReactionRequest
*github.com/gotd/td/tg.StoriesSendStoryRequest
*github.com/gotd/td/tg.StoriesStealthMode
*github.com/gotd/td/tg.StoriesStories
*github.com/gotd/td/tg.StoriesStoryReactionsList
*github.com/gotd/td/tg.StoriesStoryViews
*github.com/gotd/td/tg.StoriesStoryViewsList
*github.com/gotd/td/tg.StoriesToggleAllStoriesHiddenRequest
*github.com/gotd/td/tg.StoriesTogglePeerStoriesHiddenRequest
*github.com/gotd/td/tg.StoriesTogglePinnedRequest
*github.com/gotd/td/tg.StoryFwdHeader
*github.com/gotd/td/tg.StoryItem
github.com/gotd/td/tg.StoryItemClass (interface)
*github.com/gotd/td/tg.StoryItemDeleted
*github.com/gotd/td/tg.StoryItemSkipped
*github.com/gotd/td/tg.StoryReaction
github.com/gotd/td/tg.StoryReactionClass (interface)
*github.com/gotd/td/tg.StoryReactionPublicForward
*github.com/gotd/td/tg.StoryReactionPublicRepost
*github.com/gotd/td/tg.StoryView
github.com/gotd/td/tg.StoryViewClass (interface)
*github.com/gotd/td/tg.StoryViewPublicForward
*github.com/gotd/td/tg.StoryViewPublicRepost
*github.com/gotd/td/tg.StoryViews
*github.com/gotd/td/tg.String
*github.com/gotd/td/tg.TestUseConfigSimpleRequest
*github.com/gotd/td/tg.TestUseErrorRequest
*github.com/gotd/td/tg.TextAnchor
*github.com/gotd/td/tg.TextBold
*github.com/gotd/td/tg.TextConcat
*github.com/gotd/td/tg.TextEmail
*github.com/gotd/td/tg.TextEmpty
*github.com/gotd/td/tg.TextFixed
*github.com/gotd/td/tg.TextImage
*github.com/gotd/td/tg.TextItalic
*github.com/gotd/td/tg.TextMarked
*github.com/gotd/td/tg.TextPhone
*github.com/gotd/td/tg.TextPlain
*github.com/gotd/td/tg.TextStrike
*github.com/gotd/td/tg.TextSubscript
*github.com/gotd/td/tg.TextSuperscript
*github.com/gotd/td/tg.TextUnderline
*github.com/gotd/td/tg.TextURL
*github.com/gotd/td/tg.TextWithEntities
*github.com/gotd/td/tg.Theme
*github.com/gotd/td/tg.ThemeSettings
*github.com/gotd/td/tg.TopPeer
*github.com/gotd/td/tg.TopPeerCategoryBotsInline
*github.com/gotd/td/tg.TopPeerCategoryBotsPM
*github.com/gotd/td/tg.TopPeerCategoryChannels
github.com/gotd/td/tg.TopPeerCategoryClass (interface)
*github.com/gotd/td/tg.TopPeerCategoryCorrespondents
*github.com/gotd/td/tg.TopPeerCategoryForwardChats
*github.com/gotd/td/tg.TopPeerCategoryForwardUsers
*github.com/gotd/td/tg.TopPeerCategoryGroups
*github.com/gotd/td/tg.TopPeerCategoryPeers
*github.com/gotd/td/tg.TopPeerCategoryPhoneCalls
*github.com/gotd/td/tg.True
*github.com/gotd/td/tg.UpdateAttachMenuBots
*github.com/gotd/td/tg.UpdateAutoSaveSettings
*github.com/gotd/td/tg.UpdateBotCallbackQuery
*github.com/gotd/td/tg.UpdateBotChatBoost
*github.com/gotd/td/tg.UpdateBotChatInviteRequester
*github.com/gotd/td/tg.UpdateBotCommands
*github.com/gotd/td/tg.UpdateBotInlineQuery
*github.com/gotd/td/tg.UpdateBotInlineSend
*github.com/gotd/td/tg.UpdateBotMenuButton
*github.com/gotd/td/tg.UpdateBotMessageReaction
*github.com/gotd/td/tg.UpdateBotMessageReactions
*github.com/gotd/td/tg.UpdateBotPrecheckoutQuery
*github.com/gotd/td/tg.UpdateBotShippingQuery
*github.com/gotd/td/tg.UpdateBotStopped
*github.com/gotd/td/tg.UpdateBotWebhookJSON
*github.com/gotd/td/tg.UpdateBotWebhookJSONQuery
*github.com/gotd/td/tg.UpdateChannel
*github.com/gotd/td/tg.UpdateChannelAvailableMessages
*github.com/gotd/td/tg.UpdateChannelMessageForwards
*github.com/gotd/td/tg.UpdateChannelMessageViews
*github.com/gotd/td/tg.UpdateChannelParticipant
*github.com/gotd/td/tg.UpdateChannelPinnedTopic
*github.com/gotd/td/tg.UpdateChannelPinnedTopics
*github.com/gotd/td/tg.UpdateChannelReadMessagesContents
*github.com/gotd/td/tg.UpdateChannelTooLong
*github.com/gotd/td/tg.UpdateChannelUserTyping
*github.com/gotd/td/tg.UpdateChannelViewForumAsMessages
*github.com/gotd/td/tg.UpdateChannelWebPage
*github.com/gotd/td/tg.UpdateChat
*github.com/gotd/td/tg.UpdateChatDefaultBannedRights
*github.com/gotd/td/tg.UpdateChatParticipant
*github.com/gotd/td/tg.UpdateChatParticipantAdd
*github.com/gotd/td/tg.UpdateChatParticipantAdmin
*github.com/gotd/td/tg.UpdateChatParticipantDelete
*github.com/gotd/td/tg.UpdateChatParticipants
*github.com/gotd/td/tg.UpdateChatUserTyping
github.com/gotd/td/tg.UpdateClass (interface)
*github.com/gotd/td/tg.UpdateConfig
*github.com/gotd/td/tg.UpdateContactsReset
*github.com/gotd/td/tg.UpdateDCOptions
*github.com/gotd/td/tg.UpdateDeleteChannelMessages
*github.com/gotd/td/tg.UpdateDeleteMessages
*github.com/gotd/td/tg.UpdateDeleteScheduledMessages
*github.com/gotd/td/tg.UpdateDialogFilter
*github.com/gotd/td/tg.UpdateDialogFilterOrder
*github.com/gotd/td/tg.UpdateDialogFilters
*github.com/gotd/td/tg.UpdateDialogPinned
*github.com/gotd/td/tg.UpdateDialogUnreadMark
*github.com/gotd/td/tg.UpdateDraftMessage
*github.com/gotd/td/tg.UpdateEditChannelMessage
*github.com/gotd/td/tg.UpdateEditMessage
*github.com/gotd/td/tg.UpdateEncryptedChatTyping
*github.com/gotd/td/tg.UpdateEncryptedMessagesRead
*github.com/gotd/td/tg.UpdateEncryption
*github.com/gotd/td/tg.UpdateFavedStickers
*github.com/gotd/td/tg.UpdateFolderPeers
*github.com/gotd/td/tg.UpdateGeoLiveViewed
*github.com/gotd/td/tg.UpdateGroupCall
*github.com/gotd/td/tg.UpdateGroupCallConnection
*github.com/gotd/td/tg.UpdateGroupCallParticipants
*github.com/gotd/td/tg.UpdateGroupInvitePrivacyForbidden
*github.com/gotd/td/tg.UpdateInlineBotCallbackQuery
*github.com/gotd/td/tg.UpdateLangPack
*github.com/gotd/td/tg.UpdateLangPackTooLong
*github.com/gotd/td/tg.UpdateLoginToken
*github.com/gotd/td/tg.UpdateMessageExtendedMedia
*github.com/gotd/td/tg.UpdateMessageID
*github.com/gotd/td/tg.UpdateMessagePoll
*github.com/gotd/td/tg.UpdateMessagePollVote
*github.com/gotd/td/tg.UpdateMessageReactions
*github.com/gotd/td/tg.UpdateMoveStickerSetToTop
*github.com/gotd/td/tg.UpdateNewAuthorization
*github.com/gotd/td/tg.UpdateNewChannelMessage
*github.com/gotd/td/tg.UpdateNewEncryptedMessage
*github.com/gotd/td/tg.UpdateNewMessage
*github.com/gotd/td/tg.UpdateNewScheduledMessage
*github.com/gotd/td/tg.UpdateNewStickerSet
*github.com/gotd/td/tg.UpdateNotifySettings
*github.com/gotd/td/tg.UpdatePeerBlocked
*github.com/gotd/td/tg.UpdatePeerHistoryTTL
*github.com/gotd/td/tg.UpdatePeerLocated
*github.com/gotd/td/tg.UpdatePeerSettings
*github.com/gotd/td/tg.UpdatePeerWallpaper
*github.com/gotd/td/tg.UpdatePendingJoinRequests
*github.com/gotd/td/tg.UpdatePhoneCall
*github.com/gotd/td/tg.UpdatePhoneCallSignalingData
*github.com/gotd/td/tg.UpdatePinnedChannelMessages
*github.com/gotd/td/tg.UpdatePinnedDialogs
*github.com/gotd/td/tg.UpdatePinnedMessages
*github.com/gotd/td/tg.UpdatePrivacy
*github.com/gotd/td/tg.UpdatePtsChanged
*github.com/gotd/td/tg.UpdateReadChannelDiscussionInbox
*github.com/gotd/td/tg.UpdateReadChannelDiscussionOutbox
*github.com/gotd/td/tg.UpdateReadChannelInbox
*github.com/gotd/td/tg.UpdateReadChannelOutbox
*github.com/gotd/td/tg.UpdateReadFeaturedEmojiStickers
*github.com/gotd/td/tg.UpdateReadFeaturedStickers
*github.com/gotd/td/tg.UpdateReadHistoryInbox
*github.com/gotd/td/tg.UpdateReadHistoryOutbox
*github.com/gotd/td/tg.UpdateReadMessagesContents
*github.com/gotd/td/tg.UpdateReadStories
*github.com/gotd/td/tg.UpdateRecentEmojiStatuses
*github.com/gotd/td/tg.UpdateRecentReactions
*github.com/gotd/td/tg.UpdateRecentStickers
*github.com/gotd/td/tg.UpdateSavedGifs
*github.com/gotd/td/tg.UpdateSavedRingtones
*github.com/gotd/td/tg.UpdateSentStoryReaction
*github.com/gotd/td/tg.UpdateServiceNotification
*github.com/gotd/td/tg.UpdateShort
*github.com/gotd/td/tg.UpdateShortChatMessage
*github.com/gotd/td/tg.UpdateShortMessage
*github.com/gotd/td/tg.UpdateShortSentMessage
*github.com/gotd/td/tg.UpdateStickerSets
*github.com/gotd/td/tg.UpdateStickerSetsOrder
*github.com/gotd/td/tg.UpdateStoriesStealthMode
*github.com/gotd/td/tg.UpdateStory
*github.com/gotd/td/tg.UpdateStoryID
*github.com/gotd/td/tg.Updates
*github.com/gotd/td/tg.UpdatesChannelDifference
github.com/gotd/td/tg.UpdatesChannelDifferenceClass (interface)
*github.com/gotd/td/tg.UpdatesChannelDifferenceEmpty
*github.com/gotd/td/tg.UpdatesChannelDifferenceTooLong
github.com/gotd/td/tg.UpdatesClass (interface)
*github.com/gotd/td/tg.UpdatesCombined
*github.com/gotd/td/tg.UpdatesDifference
github.com/gotd/td/tg.UpdatesDifferenceClass (interface)
*github.com/gotd/td/tg.UpdatesDifferenceEmpty
*github.com/gotd/td/tg.UpdatesDifferenceSlice
*github.com/gotd/td/tg.UpdatesDifferenceTooLong
*github.com/gotd/td/tg.UpdatesGetChannelDifferenceRequest
*github.com/gotd/td/tg.UpdatesGetDifferenceRequest
*github.com/gotd/td/tg.UpdatesGetStateRequest
*github.com/gotd/td/tg.UpdatesState
*github.com/gotd/td/tg.UpdatesTooLong
*github.com/gotd/td/tg.UpdateTheme
*github.com/gotd/td/tg.UpdateTranscribedAudio
*github.com/gotd/td/tg.UpdateUser
*github.com/gotd/td/tg.UpdateUserEmojiStatus
*github.com/gotd/td/tg.UpdateUserName
*github.com/gotd/td/tg.UpdateUserPhone
*github.com/gotd/td/tg.UpdateUserStatus
*github.com/gotd/td/tg.UpdateUserTyping
*github.com/gotd/td/tg.UpdateWebPage
*github.com/gotd/td/tg.UpdateWebViewResultSent
*github.com/gotd/td/tg.UploadCDNFile
github.com/gotd/td/tg.UploadCDNFileClass (interface)
*github.com/gotd/td/tg.UploadCDNFileReuploadNeeded
*github.com/gotd/td/tg.UploadFile
*github.com/gotd/td/tg.UploadFileCDNRedirect
github.com/gotd/td/tg.UploadFileClass (interface)
*github.com/gotd/td/tg.UploadGetCDNFileHashesRequest
*github.com/gotd/td/tg.UploadGetCDNFileRequest
*github.com/gotd/td/tg.UploadGetFileHashesRequest
*github.com/gotd/td/tg.UploadGetFileRequest
*github.com/gotd/td/tg.UploadGetWebFileRequest
*github.com/gotd/td/tg.UploadReuploadCDNFileRequest
*github.com/gotd/td/tg.UploadSaveBigFilePartRequest
*github.com/gotd/td/tg.UploadSaveFilePartRequest
*github.com/gotd/td/tg.UploadWebFile
*github.com/gotd/td/tg.URLAuthResultAccepted
github.com/gotd/td/tg.URLAuthResultClass (interface)
*github.com/gotd/td/tg.URLAuthResultDefault
*github.com/gotd/td/tg.URLAuthResultRequest
*github.com/gotd/td/tg.User
github.com/gotd/td/tg.UserClass (interface)
*github.com/gotd/td/tg.UserClassVector
*github.com/gotd/td/tg.UserEmpty
*github.com/gotd/td/tg.UserFull
*github.com/gotd/td/tg.Username
*github.com/gotd/td/tg.UserProfilePhoto
github.com/gotd/td/tg.UserProfilePhotoClass (interface)
*github.com/gotd/td/tg.UserProfilePhotoEmpty
github.com/gotd/td/tg.UserStatusClass (interface)
*github.com/gotd/td/tg.UserStatusEmpty
*github.com/gotd/td/tg.UserStatusLastMonth
*github.com/gotd/td/tg.UserStatusLastWeek
*github.com/gotd/td/tg.UserStatusOffline
*github.com/gotd/td/tg.UserStatusOnline
*github.com/gotd/td/tg.UserStatusRecently
*github.com/gotd/td/tg.UsersGetFullUserRequest
*github.com/gotd/td/tg.UsersGetUsersRequest
*github.com/gotd/td/tg.UsersSetSecureValueErrorsRequest
*github.com/gotd/td/tg.UsersUserFull
*github.com/gotd/td/tg.VideoSize
github.com/gotd/td/tg.VideoSizeClass (interface)
*github.com/gotd/td/tg.VideoSizeEmojiMarkup
*github.com/gotd/td/tg.VideoSizeStickerMarkup
*github.com/gotd/td/tg.WallPaper
github.com/gotd/td/tg.WallPaperClass (interface)
*github.com/gotd/td/tg.WallPaperClassVector
*github.com/gotd/td/tg.WallPaperNoFile
*github.com/gotd/td/tg.WallPaperSettings
*github.com/gotd/td/tg.WebAuthorization
*github.com/gotd/td/tg.WebDocument
github.com/gotd/td/tg.WebDocumentClass (interface)
*github.com/gotd/td/tg.WebDocumentNoProxy
*github.com/gotd/td/tg.WebPage
github.com/gotd/td/tg.WebPageAttributeClass (interface)
*github.com/gotd/td/tg.WebPageAttributeStory
*github.com/gotd/td/tg.WebPageAttributeTheme
github.com/gotd/td/tg.WebPageClass (interface)
*github.com/gotd/td/tg.WebPageEmpty
*github.com/gotd/td/tg.WebPageNotModified
*github.com/gotd/td/tg.WebPagePending
*github.com/gotd/td/tg.WebViewMessageSent
*github.com/gotd/td/tg.WebViewResultURL
go.opentelemetry.io/otel/attribute.Type
go.opentelemetry.io/otel/codes.Code
go.opentelemetry.io/otel/trace.SpanID
go.opentelemetry.io/otel/trace.SpanKind
go.opentelemetry.io/otel/trace.TraceFlags
go.opentelemetry.io/otel/trace.TraceID
go.opentelemetry.io/otel/trace.TraceState
*go.uber.org/atomic.Bool
*go.uber.org/atomic.Duration
*go.uber.org/atomic.Float32
*go.uber.org/atomic.Float64
*go.uber.org/atomic.Int32
*go.uber.org/atomic.Int64
*go.uber.org/atomic.Pointer[...]
*go.uber.org/atomic.String
*go.uber.org/atomic.Uint32
*go.uber.org/atomic.Uint64
*go.uber.org/atomic.Uintptr
go.uber.org/zap.AtomicLevel
*go.uber.org/zap/buffer.Buffer
go.uber.org/zap/zapcore.EntryCaller
go.uber.org/zap/zapcore.Level
*golang.org/x/net/internal/socks.Addr
golang.org/x/net/internal/socks.Command
golang.org/x/net/internal/socks.Reply
image.Point
image.Rectangle
image.YCbCrSubsampleRatio
internal/abi.Kind
*internal/godebug.Setting
internal/reflectlite.Type (interface)
io/fs.FileMode
math/big.Accuracy
*math/big.Float
*math/big.Int
*math/big.Rat
math/big.RoundingMode
net.Addr (interface)
net.Flags
net.HardwareAddr
net.IP
*net.IPAddr
net.IPMask
*net.IPNet
*net.TCPAddr
*net.UDPAddr
*net.UnixAddr
net/http.ConnState
*net/http.Cookie
net/netip.Addr
net/netip.AddrPort
net/netip.Prefix
*net/url.URL
*net/url.Userinfo
nhooyr.io/websocket.MessageType
nhooyr.io/websocket.StatusCode
*os.ProcessState
os.Signal (interface)
reflect.ChanDir
reflect.Kind
reflect.Type (interface)
reflect.Value
*regexp.Regexp
regexp/syntax.ErrorCode
*regexp/syntax.Inst
regexp/syntax.InstOp
regexp/syntax.Op
*regexp/syntax.Prog
*regexp/syntax.Regexp
rsc.io/qr/coding.Alpha
rsc.io/qr/coding.Level
rsc.io/qr/coding.Num
rsc.io/qr/coding.Pixel
rsc.io/qr/coding.PixelRole
rsc.io/qr/coding.String
rsc.io/qr/coding.Version
*strings.Builder
syscall.Signal
time.Duration
*time.Location
time.Month
time.Time
time.Weekday
vendor/golang.org/x/net/dns/dnsmessage.Class
vendor/golang.org/x/net/dns/dnsmessage.Name
vendor/golang.org/x/net/dns/dnsmessage.RCode
vendor/golang.org/x/net/dns/dnsmessage.Type
vendor/golang.org/x/net/http2/hpack.HeaderField
*vendor/golang.org/x/net/idna.Profile
*vendor/golang.org/x/text/unicode/bidi.Run
lockRank
stwReason
waitReason
*context.afterFuncCtx
context.backgroundCtx
*context.cancelCtx
context.stringer (interface)
*context.timerCtx
context.todoCtx
*context.valueCtx
context.withoutCancelCtx
*crypto/ecdh.nistCurve[...]
*crypto/ecdh.x25519Curve
crypto/tls.alert
*embed.file
encoding/binary.bigEndian
encoding/binary.littleEndian
encoding/binary.nativeEndian
*encoding/json.encodeState
flag.boolFlag (interface)
flag.boolFuncValue
*flag.boolValue
*flag.durationValue
*flag.float64Value
flag.funcValue
*flag.int64Value
*flag.intValue
*flag.stringValue
flag.textValue
*flag.uint64Value
*flag.uintValue
go.opentelemetry.io/otel/trace.member
internal/reflectlite.mapType
internal/reflectlite.rtype
io/fs.dirInfo
*io/fs.statDirEntry
*math/big.decimal
math/big.nat
net.addrPortUDPAddr
net.fileAddr
net.hostLookupOrder
net.pipeAddr
net.sockaddr (interface)
net/http.connectMethodKey
*net/http.contextKey
net/http.http2ContinuationFrame
net/http.http2DataFrame
net/http.http2ErrCode
net/http.http2FrameHeader
net/http.http2FrameType
net/http.http2FrameWriteRequest
net/http.http2GoAwayFrame
net/http.http2HeadersFrame
net/http.http2MetaHeadersFrame
net/http.http2PingFrame
net/http.http2PriorityFrame
net/http.http2PushPromiseFrame
net/http.http2RSTStreamFrame
net/http.http2Setting
net/http.http2SettingID
net/http.http2SettingsFrame
net/http.http2streamState
net/http.http2UnknownFrame
net/http.http2WindowUpdateFrame
*net/http.http2writeData
*net/http.socksAddr
net/http.socksCommand
net/http.socksReply
*nhooyr.io/websocket.compressionOptions
nhooyr.io/websocket.opcode
nhooyr.io/websocket.websocketAddr
*os.unixDirent
*path/filepath.statDirEntry
*reflect.rtype
*regexp.onePassInst
*strconv.decimal
*vendor/golang.org/x/text/unicode/bidi.bracketPair
stringer : fmt.Stringer
stringer : context.stringer
The specialized convTx routines need a type descriptor to use when calling mallocgc.
We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
so we use named types here.
We then construct interface values of these types,
and then extract the type word to use as needed.
type structtype = abi.StructType (struct)
stwReason is an enumeration of reasons the world is stopping.
( stwReason) String() string
stwReason : fmt.Stringer
stwReason : stringer
stwReason : context.stringer
func stopTheWorld(reason stwReason)
func stopTheWorldGC(reason stwReason)
func stopTheWorldWithSema(reason stwReason)
func traceSTWStart(reason stwReason)
const stwAllGoroutinesStack
const stwAllThreadsSyscall
const stwForTestCountPagesInUse
const stwForTestPageCachePagesLeaked
const stwForTestReadMemStatsSlow
const stwForTestReadMetricsSlow
const stwForTestResetDebugLog
const stwGCMarkTerm
const stwGCSweepTerm
const stwGOMAXPROCS
const stwGoroutineProfile
const stwGoroutineProfileCleanup
const stwReadMemStats
const stwStartTrace
const stwStopTrace
const stwUnknown
const stwWriteHeapDump
sudog represents a g in a wait list, such as for sending/receiving
on a channel.
sudog is necessary because the g ↔ synchronization object relation
is many-to-many. A g can be on many wait lists, so there may be
many sudogs for one g; and many gs may be waiting on the same
synchronization object, so there may be many sudogs for one object.
sudogs are allocated from a special pool. Use acquireSudog and
releaseSudog to allocate and free them.
acquiretime int64
// channel
// data element (may point to stack)
g *g
isSelect indicates g is participating in a select, so
g.selectDone must be CAS'd to win the wake-up race.
next *sudog
// semaRoot binary tree
prev *sudog
releasetime int64
success indicates whether communication over channel c
succeeded. It is true if the goroutine was awoken because a
value was delivered over channel c, and false if awoken
because c was closed.
ticket uint32
// g.waiting list or semaRoot
// semaRoot
func acquireSudog() *sudog
func racenotify(c *hchan, idx uint, sg *sudog)
func racesync(c *hchan, sg *sudog)
func readyWithTime(s *sudog, traceskip int)
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer)
func releaseSudog(s *sudog)
func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int)
func sendDirect(t *_type, sg *sudog, src unsafe.Pointer)
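A user-level select is the common case behind the many-to-many relation described above: while the select below blocks, the single goroutine is queued on the wait lists of both channels, one sudog per channel. A minimal sketch (ordinary user code, not runtime internals):
    package main

    import "fmt"

    func main() {
        a := make(chan int)
        b := make(chan int)
        go func() { a <- 1 }()

        // While this select blocks, the current g sits on the wait
        // lists of both a and b: one sudog per channel for one g.
        select {
        case v := <-a:
            fmt.Println("from a:", v)
        case v := <-b:
            fmt.Println("from b:", v)
        }
    }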
dead indicates the goroutine was not suspended because it
is dead. This goroutine could be reused after the dead
state was observed, so the caller must not assume that it
remains dead.
g *g
stopped indicates that this suspendG transitioned the G to
_Gwaiting via g.preemptStop and thus is responsible for
readying it when done.
func suspendG(gp *g) suspendGState
func resumeG(state suspendGState)
sweepClass is a spanClass and one bit to represent whether we're currently
sweeping partial or full spans.
(*sweepClass) clear()
(*sweepClass) load() sweepClass
split returns the underlying span class as well as
whether we're interested in the full or partial
unswept lists for that class, indicated as a boolean
(true means "full").
(*sweepClass) update(sNew sweepClass)
const sweepClassDone
State of background sweep.
active tracks outstanding sweepers and the sweep
termination condition.
centralIndex is the current unswept span class.
It represents an index into the mcentral span
sets. Accessed and updated via its load and
update methods. Not protected by a lock.
Reset at mark termination.
Used by mheap.nextSpanForSweep.
g *g
lock mutex
nbgsweep uint32
npausesweep uint32
parked bool
var sweep
sweepLocked represents sweep ownership of a span.
mspan *mspan
allocBits and gcmarkBits hold pointers to a span's mark and
allocation bits. The pointers are 8 byte aligned.
There are three arenas where this data is held.
free: Dirty arenas that are no longer accessed
and can be reused.
next: Holds information to be used in the next GC cycle.
current: Information being used during this GC cycle.
previous: Information being used during the last GC cycle.
A new GC cycle starts with the call to finishsweep_m.
finishsweep_m moves the previous arena to the free arena,
the current arena to the previous arena, and
the next arena to the current arena.
The next arena is populated as the spans request
memory to hold gcmarkBits for the next GC cycle as well
as allocBits for newly allocated spans.
The pointer arithmetic is done "by hand" instead of using
arrays to avoid bounds checks along critical performance
paths.
The sweep will free the old allocBits and set allocBits to the
gcmarkBits. The gcmarkBits are replaced with a fresh zeroed
out memory.
Cache of the allocBits at freeindex. allocCache is shifted
such that the lowest bit corresponds to the bit freeindex.
allocCache holds the complement of allocBits, thus allowing
ctz (count trailing zero) to use it directly.
allocCache may contain bits beyond s.nelems; the caller must ignore
these.
// number of allocated objects
// a copy of allocCount that is stored just before this span is cached
// for divide by elemsize
// computed from sizeclass or from npages
freeIndexForScan is like freeindex, except that freeindex is
used by the allocator whereas freeIndexForScan is used by the
GC scanner. They are two fields so that the GC sees the object
is allocated only when the object and the heap bits are
initialized (see also the assignment of freeIndexForScan in
mallocgc, and issue 54596).
freeindex is the slot index between 0 and nelems at which to begin scanning
for the next free object in this span.
Each allocation scans allocBits starting at freeindex until it encounters a 0
indicating a free object. freeindex is then adjusted so that subsequent scans begin
just past the newly discovered free object.
If freeindex == nelem, this span has no free objects.
allocBits is a bitmap of objects in this span.
If n >= freeindex and allocBits[n/8] & (1<<(n%8)) is 0
then object n is free;
otherwise, object n is allocated. Bits starting at nelem are
undefined and should never be referenced.
Object n starts at address n*elemsize + (start << pageShift).
mspan.gcmarkBits *gcBits
// whether or not this span represents a user arena
// end of data in span
// For debugging. TODO: Remove.
// list of free objects in mSpanManual spans
// needs to be zeroed before allocation
TODO: Look up nelems from sizeclass and remove this field if it
helps performance.
// number of object in the span.
// next span in list, or nil if none
// number of pages in span
// bitmap for pinned objects; accessed atomically
// previous span in list, or nil if none
// size class and noscan (uint8)
// guards specials list and changes to pinnerBits
// linked list of special records sorted by offset.
// address of first byte of span aka s.base()
// mSpanInUse etc; accessed atomically (get/set methods)
mspan.sweepgen uint32
// interval for managing chunk allocation
( sweepLocked) allocBitsForIndex(allocBitIndex uintptr) markBits
( sweepLocked) base() uintptr
countAlloc returns the number of objects allocated in span s by
scanning the allocation bitmap.
decPinCounter decreases the counter. If the counter reaches 0, the counter
special is deleted and false is returned. Otherwise true is returned.
divideByElemSize returns n/s.elemsize.
n must be within [0, s.npages*_PageSize),
or may be exactly s.npages*_PageSize
if s.elemsize is from sizeclasses.go.
nosplit, because it is called by objIndex, which is nosplit
Returns only when span s has been swept.
nosplit, because it's called by isPinned, which is nosplit
( sweepLocked) inList() bool
incPinCounter is only called for multiple pins of the same object and records
the _additional_ pins.
Initialize a new span with the given start and npages.
initHeapBits initializes the heap bitmap for a span.
If this is a span of single pointer allocations, it initializes all
words to pointer. If force is true, clears all bits.
isFree reports whether the index'th object in s is unallocated.
The caller must ensure s.state is mSpanInUse, and there must have
been no preemption points since ensuring this (which could allow a
GC transition, which would allow the state to change).
isUnusedUserArenaChunk indicates that the arena chunk has been set to fault
and doesn't contain any scannable memory anymore. However, it might still be
mSpanInUse as it sits on the quarantine list, since it needs to be swept.
This is not safe to execute unless the caller has ownership of the mspan or
the world is stopped (preemption is prevented while the relevant state changes).
This is really only meant to be used by accounting tests in the runtime to
distinguish when a span shouldn't be counted (since mSpanInUse might not be
enough).
( sweepLocked) layout() (size, n, total uintptr)
( sweepLocked) markBitsForBase() markBits
( sweepLocked) markBitsForIndex(objIndex uintptr) markBits
newPinnerBits returns a pointer to 8 byte aligned bytes to be used for this
span's pinner bits. newPinnerBits is used to mark objects that are pinned.
They are copied when the span is swept.
nextFreeIndex returns the index of the next free object in s at
or after s.freeindex.
There are hardware instructions that can be used to make this
faster if profiling warrants it.
nosplit, because it is called by other nosplit code like findObject
( sweepLocked) pinnerBitSize() uintptr
refillAllocCache takes 8 bytes of s.allocBits starting at whichByte
and negates them so that ctz (count trailing zeros) instructions
can be used. It then places these 8 bytes into the cached 64 bit
s.allocCache.
refreshPinnerBits replaces pinnerBits with a fresh copy in the arenas for the
next GC cycle. If it does not contain any pinned objects, pinnerBits of the
span is set to nil.
reportZombies reports any marked but free objects in s and throws.
This generally means one of the following:
1. User code converted a pointer to a uintptr and then back
unsafely, and a GC ran while the uintptr was the only reference to
an object.
2. User code (or a compiler bug) constructed a bad pointer that
points to a free slot, often a past-the-end pointer.
3. The GC two cycles ago missed a pointer and freed a live object,
but it was still live in the last cycle, so this GC cycle found a
pointer to that object and marked it.
( sweepLocked) setPinnerBits(p *pinnerBits)
setUserArenaChunkToFault sets the address space for the user arena chunk to fault
and releases any underlying memory resources.
Must be in a non-preemptible state to ensure the consistency of statistics
exported to MemStats.
Find a splice point in the sorted list and check for an already existing
record. Returns a pointer to the next-reference in the list predecessor.
Returns true, if the referenced item is an exact match.
sweep frees or collects finalizers for blocks not marked in the mark phase.
It clears the mark bits in preparation for the next GC round.
Returns true if the span was returned to heap.
If preserve=true, don't return it to heap nor relink in mcentral lists;
caller takes care of it.
userArenaNextFree reserves space in the user arena for an item of the specified
type. If cap is not -1, this is for an array of cap elements of type t.
sweepLocker acquires sweep ownership of spans.
sweepGen is the sweep generation of the heap.
valid bool
tryAcquire attempts to acquire sweep ownership of span s. If it
successfully acquires ownership, it blocks sweep completion.
sysMemStat represents a global system statistic that is managed atomically.
This type must structurally be a uint64 so that mstats aligns with MemStats.
add atomically adds n to the sysMemStat.
Must be nosplit as it is called in runtime initialization, e.g. newosproc0.
load atomically reads the value of the stat.
Must be nosplit as it is called in runtime initialization, e.g. newosproc0.
func persistentalloc(size, align uintptr, sysStat *sysMemStat) unsafe.Pointer
func persistentalloc1(size, align uintptr, sysStat *sysMemStat) *notInHeap
func sysAlloc(n uintptr, sysStat *sysMemStat) unsafe.Pointer
func sysFree(v unsafe.Pointer, n uintptr, sysStat *sysMemStat)
func sysMap(v unsafe.Pointer, n uintptr, sysStat *sysMemStat)
sysStatsAggregate represents system memory stats obtained
from the runtime. This set of stats is grouped together because
they're all relatively cheap to acquire and generally independent
of one another and other runtime memory stats. The fact that they
may be acquired at different times, especially with respect to
heapStatsAggregate, means there could be some skew, but because
these stats are independent, there's no real consistency issue here.
buckHashSys uint64
gcCyclesDone uint64
gcCyclesForced uint64
gcMiscSys uint64
heapGoal uint64
mCacheInUse uint64
mCacheSys uint64
mSpanInUse uint64
mSpanSys uint64
otherSys uint64
stacksSys uint64
compute populates the sysStatsAggregate with values from the runtime.
taggedPointer is a pointer with a numeric tag.
The size of the numeric tag is GOARCH-dependent,
currently at least 10 bits.
This should only be used with pointers allocated outside the Go heap.
Pointer returns the pointer from a taggedPointer.
Tag returns the tag from a taggedPointer.
func taggedPointerPack(ptr unsafe.Pointer, tag uintptr) taggedPointer
// relocated section address
// vaddr + section length
// prelinked section vaddr
throwType indicates the current type of ongoing throw, which affects the
amount of detail printed to stderr. Higher values include more detail.
func fatalthrow(t throwType)
const throwTypeNone
const throwTypeRuntime
const throwTypeUser
timeHistogram represents a distribution of durations in
nanoseconds.
The accuracy and range of the histogram is defined by the
timeHistSubBucketBits and timeHistNumBuckets constants.
It is an HDR histogram with exponentially-distributed
buckets and linearly distributed sub-buckets.
The histogram is safe for concurrent reads and writes.
counts [160]atomic.Uint64
overflow counts all the times we got a duration that exceeded
the range counts represents.
underflow counts all the times we got a negative duration
sample. Because of how time works on some platforms, it's
possible to measure negative durations. We could ignore them,
but we record them anyway because it's better to have some
signal that it's happening than just missing samples.
record adds the given duration to the distribution.
Disallow preemptions and stack growths because this function
may run in sensitive locations.
Package time knows the layout of this structure.
If this struct changes, adjust ../time/sleep.go:/runtimeTimer.
arg any
f func(any, uintptr)
What to set the when field to in timerModifiedXX status.
period int64
If this timer is on a heap, which P's heap it is on.
puintptr rather than *p to match uintptr in the versions
of this struct defined in other packages.
seq uintptr
The status field holds one of the values below.
Timer wakes up at when, and then at when+period, ... (period > 0 only)
each time calling f(arg, now) in the timer goroutine, so f must be
a well-behaved function and not block.
when must be positive on an active timer.
func addAdjustedTimers(pp *p, moved []*timer)
func addtimer(t *timer)
func deltimer(t *timer) bool
func doaddtimer(pp *p, t *timer)
func modTimer(t *timer, when, period int64, f func(any, uintptr), arg any, seq uintptr)
func modtimer(t *timer, when, period int64, f func(any, uintptr), arg any, seq uintptr) bool
func moveTimers(pp *p, timers []*timer)
func resetTimer(t *timer, when int64) bool
func resettimer(t *timer, when int64) bool
func runOneTimer(pp *p, t *timer, now int64)
func siftdownTimer(t []*timer, i int)
func siftupTimer(t []*timer, i int) int
func startTimer(t *timer)
func stopTimer(t *timer) bool
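These hooks are normally reached through package time rather than called directly. A minimal user-side sketch of the when/period/f(arg, now) behavior described above, using time.AfterFunc:
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        done := make(chan struct{})

        // AfterFunc arms a runtime timer; the callback runs on the timer
        // goroutine when it fires, so it must not block for long.
        t := time.AfterFunc(50*time.Millisecond, func() {
            fmt.Println("timer fired")
            close(done)
        })
        defer t.Stop()

        <-done
    }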
func concatstring2(buf *tmpBuf, a0, a1 string) string
func concatstring3(buf *tmpBuf, a0, a1, a2 string) string
func concatstring4(buf *tmpBuf, a0, a1, a2, a3 string) string
func concatstring5(buf *tmpBuf, a0, a1, a2, a3, a4 string) string
func concatstrings(buf *tmpBuf, a []string) string
func rawstringtmp(buf *tmpBuf, l int) (s string, b []byte)
func slicebytetostring(buf *tmpBuf, ptr *byte, n int) string
func slicerunetostring(buf *tmpBuf, a []rune) string
func stringtoslicebyte(buf *tmpBuf, s string) []byte
traceAlloc is a non-thread-safe region allocator.
It holds a linked list of traceAllocBlock.
head traceAllocBlockPtr
off uintptr
alloc allocates an n-byte block.
drop frees all previously allocated memory and resets the allocator.
traceAllocBlock is a block in traceAlloc.
traceAllocBlock is allocated from non-GC'd memory, so it must not
contain heap pointers. Writes to pointers to traceAllocBlocks do
not need write barriers.
data [65528]byte
next traceAllocBlockPtr
TODO: Since traceAllocBlock is now embedded runtime/internal/sys.NotInHeap, this isn't necessary.
( traceAllocBlockPtr) ptr() *traceAllocBlock
(*traceAllocBlockPtr) set(x *traceAllocBlock)
traceBlockReason is an enumeration of reasons a goroutine might block.
This is the interface the rest of the runtime uses to tell the
tracer why a goroutine blocked. The tracer then propagates this information
into the trace however it sees fit.
Note that traceBlockReasons should not be compared, since reasons that are
distinct by name may *not* be distinct by value.
func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceReason traceBlockReason, traceskip int)
func goparkunlock(lock *mutex, reason waitReason, traceReason traceBlockReason, traceskip int)
func traceGoPark(reason traceBlockReason, skip int)
const traceBlockGeneric
traceBuf is per-P tracing buffer.
// underlying buffer for traceBufHeader.buf
traceBufHeader traceBufHeader
// when we wrote the last event
// in trace.empty/full
// next write offset in arr
// scratch buffer for traceback
byte appends v to buf.
varint appends v to buf in little-endian-base-128 encoding.
varintAt writes varint v at byte position pos in buf. This always
consumes traceBytesPerNumber bytes. This is intended for when the
caller needs to reserve space for a varint but can't populate it
until later.
func traceBufPtrOf(b *traceBuf) traceBufPtr
traceBufHeader is per-P tracing buffer.
// when we wrote the last event
// in trace.empty/full
// next write offset in arr
// scratch buffer for traceback
traceBufPtr is a *traceBuf that is not traced by the garbage
collector and doesn't have write barriers. traceBufs are not
allocated from the GC'd heap, so this is safe, and are often
manipulated in contexts where write barriers are not allowed, so
this is necessary.
TODO: Since traceBuf is now embedded runtime/internal/sys.NotInHeap, this isn't necessary.
( traceBufPtr) ptr() *traceBuf
(*traceBufPtr) set(b *traceBuf)
func traceAcquireBuffer() (mp *m, pid int32, bufp *traceBufPtr)
func traceBufPtrOf(b *traceBuf) traceBufPtr
func traceFlush(buf traceBufPtr, pid int32) traceBufPtr
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
func traceFrames(bufp traceBufPtr, pcs []uintptr) ([]traceFrame, traceBufPtr)
func traceFullDequeue() traceBufPtr
func traceString(bufp *traceBufPtr, pid int32, s string) (uint64, *traceBufPtr)
func traceEventLocked(extraBytes int, mp *m, pid int32, bufp *traceBufPtr, ev byte, stackID uint32, skip int, args ...uint64)
func traceFlush(buf traceBufPtr, pid int32) traceBufPtr
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
func traceFrames(bufp traceBufPtr, pcs []uintptr) ([]traceFrame, traceBufPtr)
func traceFullQueue(buf traceBufPtr)
func traceString(bufp *traceBufPtr, pid int32, s string) (uint64, *traceBufPtr)
PC uintptr
fileID uint64
funcID uint64
line uint64
func traceFrameForPC(buf traceBufPtr, pid int32, f Frame) (traceFrame, traceBufPtr)
func traceFrames(bufp traceBufPtr, pcs []uintptr) ([]traceFrame, traceBufPtr)
traceStack is a single stack in traceStackTable.
hash uintptr
id uint32
link traceStackPtr
n int
// real type [n]uintptr
stack returns slice of PCs.
( traceStackPtr) ptr() *traceStack
traceStackTable maps stack traces (arrays of PC's) to unique uint32 ids.
It is lock-free for reading.
// Must be acquired on the system stack
mem traceAlloc
seq uint32
tab [8192]traceStackPtr
dump writes all previously cached stacks to trace buffers,
releases all memory and resets state.
This must run on the system stack because it calls traceFlush.
find checks if the stack trace pcs is already present in the table.
newStack allocates a new stack of size n.
put returns a unique id for the stack trace pcs and caches it in the table,
if it sees the trace for the first time.
// init tracing activation status
// heap allocations
// heap allocated bytes
// init goroutine id
var inittrace
traceTime represents a timestamp for the trace.
func traceClockNow() traceTime
__fpregs_mem fpstate
uc_flags uint64
uc_link *ucontext
uc_mcontext mcontext
uc_sigmask usigset
uc_stack stackt
The specialized convTx routines need a type descriptor to use when calling mallocgc.
We don't need the type to be exact, just to have the correct size, alignment, and pointer-ness.
However, when debugging, it'd be nice to have some indication in mallocgc where the types came from,
so we use named types here.
We then construct interface values of these types,
and then extract the type word to use as needed.
type uncommontype = abi.UncommonType (struct)
An unwinder iterates the physical stack frames of a Go stack.
Typical use of an unwinder looks like:
    var u unwinder
    for u.init(gp, 0); u.valid(); u.next() {
        // ... use frame info in u ...
    }
Implementation note: This is carefully structured to be pointer-free because
tracebacks happen in places that disallow write barriers (e.g., signals).
Even if this is stack-allocated, its pointer-receiver methods don't know that
their receiver is on the stack, so they still emit write barriers. Here we
address that by carefully avoiding any pointers in this type. Another
approach would be to split this into a mutable part that's passed by pointer
but contains no pointers itself and an immutable part that's passed and
returned by value and can contain pointers. We could potentially hide that
we're doing that in trivial methods that are inlined into the caller that has
the stack allocation, but that's fragile.
cache is used to cache pcvalue lookups.
calleeFuncID is the function ID of the caller of the current
frame.
cgoCtxt is the index into g.cgoCtxt of the next frame on the cgo stack.
The cgo stack is unwound in tandem with the Go stack as we find marker frames.
flags are the flags to this unwind. Some of these are updated as we
unwind (see the flags documentation).
frame is the current physical stack frame, or all 0s if
there is no frame.
g is the G whose stack is being unwound. If the
unwindJumpStack flag is set and the unwinder jumps stacks,
this will be different from the initial G.
cgoCallers populates pcBuf with the cgo callers of the current frame using
the registered cgo unwinder. It returns the number of PCs written to pcBuf.
If the current frame is not a cgo frame or if there's no registered cgo
unwinder, it returns 0.
finishInternal is an unwinder-internal helper called after the stack has been
exhausted. It sets the unwinder to an invalid state and checks that it
successfully unwound the entire stack.
init initializes u to start unwinding gp's stack and positions the
iterator on gp's innermost frame. gp must not be the current G.
A single unwinder can be reused for multiple unwinds.
(*unwinder) initAt(pc0, sp0, lr0 uintptr, gp *g, flags unwindFlags)
(*unwinder) next()
resolveInternal fills in u.frame based on u.frame.fn, pc, and sp.
innermost indicates that this is the first resolve on this stack. If
innermost is set, isSyscall indicates that the PC/SP was retrieved from
gp.syscall*; this is otherwise ignored.
On entry, u.frame contains:
- fn is the running function.
- pc is the PC in the running function.
- sp is the stack pointer at that program counter.
- For the innermost frame on LR machines, lr is the program counter that called fn.
On return, u.frame contains:
- fp is the stack pointer of the caller.
- lr is the program counter that called fn.
- varp, argp, and continpc are populated for the current frame.
If fn is a stack-jumping function, resolveInternal can change the entire
frame state to follow that stack jump.
This is internal to unwinder.
symPC returns the PC that should be used for symbolizing the current frame.
Specifically, this is the PC of the last instruction executed in this frame.
If this frame did a normal call, then frame.pc is a return PC, so this will
return frame.pc-1, which points into the CALL instruction. If the frame was
interrupted by a signal (e.g., profiler, segv, etc) then frame.pc is for the
trapped instruction, so this returns frame.pc. See issue #34123. Finally,
frame.pc can be at function entry when the frame is initialized without
actually running code, like in runtime.mstart, in which case this returns
frame.pc because that's the best we can do.
(*unwinder) valid() bool
func traceback2(u *unwinder, showRuntime bool, skip, max int) (n, lastN int)
func tracebackPCs(u *unwinder, skip int, pcBuf []uintptr) int
unwindFlags control the behavior of various unwinders.
func traceback1(pc, sp, lr uintptr, gp *g, flags unwindFlags)
const unwindJumpStack
const unwindPrintErrors
const unwindSilentErrors
const unwindTrap
active is the user arena chunk we're currently allocating into.
defunct is true if free has been called on this arena.
This is just a best-effort way to discover a concurrent allocation
and free. Also used to detect a double-free.
full is a list of full chunks that no longer have enough free memory, and
that we'll free once this user arena is freed.
Can't use mSpanList here because it's not-in-heap.
refs is a set of references to the arena chunks so that they're kept alive.
The last reference in the list always refers to active, while the rest of
them correspond to fullList. Specifically, the head of fullList is the
second-to-last one, fullList.next is the third-to-last, and so on.
In other words, every time a new chunk becomes active, it is appended to this list.
alloc reserves space in the current chunk or calls refill and reserves space
in a new chunk. If cap is negative, the type will be taken literally, otherwise
it will be considered as an element type for a slice backing store with capacity
cap.
free returns the userArena's chunks back to mheap and marks it as defunct.
Must be called at most once for any given arena.
This operation is not safe to call concurrently with other operations on the
same arena.
new allocates a new object of the provided type into the arena, and returns
its pointer.
This operation is not safe to call concurrently with other operations on the
same arena.
refill inserts the current arena chunk onto the full list and obtains a new
one, either from the partial list or allocating a new one, both from mheap.
slice allocates a new slice backing store. slice must be a pointer to a slice
(i.e. *[]T), because userArenaSlice will update the slice directly.
cap determines the capacity of the slice backing store and must be non-negative.
This operation is not safe to call concurrently with other operations on the
same arena.
func newUserArena() *userArena
bucket []uint32
chain []uint32
isGNUHash bool
Load information
// loadAddr - recorded vaddr
symOff uint32
symstrings *[1125899906842623]byte
Symbol table
valid bool
verdef *elfVerdef
Version table
func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32
func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)
func vdsoParseSymbols(info *vdsoInfo, version int32)
verHash uint32
version string
func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32
var vdsoLinuxVersion
first *sudog
last *sudog
(*waitq) dequeue() *sudog
(*waitq) dequeueSudoG(sgp *sudog)
(*waitq) enqueue(sgp *sudog)
A waitReason explains why a goroutine has been stopped.
See gopark. Do not re-use waitReasons, add new ones.
( waitReason) String() string
( waitReason) isMutexWait() bool
waitReason : fmt.Stringer
waitReason : stringer
waitReason : context.stringer
func casGToWaiting(gp *g, old uint32, reason waitReason)
func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason waitReason, traceReason traceBlockReason, traceskip int)
func goparkunlock(lock *mutex, reason waitReason, traceReason traceBlockReason, traceskip int)
func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int, reason waitReason)
const waitReasonChanReceive
const waitReasonChanReceiveNilChan
const waitReasonChanSend
const waitReasonChanSendNilChan
const waitReasonDebugCall
const waitReasonDumpingHeap
const waitReasonFinalizerWait
const waitReasonForceGCIdle
const waitReasonGarbageCollection
const waitReasonGarbageCollectionScan
const waitReasonGCAssistMarking
const waitReasonGCAssistWait
const waitReasonGCMarkTermination
const waitReasonGCScavengeWait
const waitReasonGCSweepWait
const waitReasonGCWorkerActive
const waitReasonGCWorkerIdle
const waitReasonIOWait
const waitReasonPanicWait
const waitReasonPreempted
const waitReasonSelect
const waitReasonSelectNoCases
const waitReasonSemacquire
const waitReasonSleep
const waitReasonStoppingTheWorld
const waitReasonSyncCondWait
const waitReasonSyncMutexLock
const waitReasonSyncRWMutexLock
const waitReasonSyncRWMutexRLock
const waitReasonTraceReaderBlocked
const waitReasonWaitForGCCycle
const waitReasonZero
wbBuf is a per-P buffer of pointers queued by the write barrier.
This buffer is flushed to the GC workbufs when it fills up and on
various GC transitions.
This is closely related to a "sequential store buffer" (SSB),
except that SSBs are usually used for maintaining remembered sets,
while this is used for marking.
buf stores a series of pointers to execute write barriers on.
end points to just past the end of buf. It must not be a
pointer type because it points past the end of buf and must
be updated without write barriers.
next points to the next slot in buf. It must not be a
pointer type because it can point past the end of buf and
must be updated without write barriers.
This is a pointer rather than an index to optimize the
write barrier assembly.
discard resets b's next pointer, but not its end pointer.
This must be nosplit because it's called by wbBufFlush.
empty reports whether b contains no pointers.
getX returns space in the write barrier buffer to store X pointers.
getX will flush the buffer if necessary. Callers should use this as:
    buf := &getg().m.p.ptr().wbBuf
    p := buf.get2()
    p[0], p[1] = old, new
    ... actual memory write ...
The caller must ensure there are no preemption points during the
above sequence. There must be no preemption points while buf is in
use because it is a per-P resource. There must be no preemption
points between the buffer put and the write to memory because this
could allow a GC phase change, which could result in missed write
barriers.
getX must be nowritebarrierrec because write barriers here would
corrupt the write barrier buffer. It (and everything it calls, if
it called anything) has to be nosplit to avoid scheduling on to a
different P and a different buffer.
(*wbBuf) get2() *[2]uintptr
reset empties b by resetting its next and end pointers.
account for the above fields
workbufhdr workbufhdr
workbufhdr.nobj int
// must be first
(*workbuf) checkempty()
(*workbuf) checknonempty()
func getempty() *workbuf
func handoff(b *workbuf) *workbuf
func trygetfull() *workbuf
func handoff(b *workbuf) *workbuf
func putempty(b *workbuf)
func putfull(b *workbuf)
assistQueue is a queue of assists that are blocked because
there was neither enough credit to steal nor enough work to
do.
Base indexes of each root type. Set by gcMarkRootPrepare.
Base indexes of each root type. Set by gcMarkRootPrepare.
Base indexes of each root type. Set by gcMarkRootPrepare.
Base indexes of each root type. Set by gcMarkRootPrepare.
Base indexes of each root type. Set by gcMarkRootPrepare.
// cas to 1 when at a background mark completion point
// signal background mark worker has started
bytesMarked is the number of bytes marked this cycle. This
includes bytes blackened in scanned objects, noscan objects
that go straight to black, and permagrey objects scanned by
markroot during the concurrent scan phase. This is updated
atomically during the cycle. Updates may be batched
arbitrarily, since the value is only read at the end of the
cycle.
Because of benign races during marking, this number may not
be the exact number of marked bytes, but it should be very
close.
Put this field here because it needs 64-bit atomic access
(and thus 8-byte alignment even on 32-bit architectures).
Cumulative estimated CPU usage.
// GC assists
// GC dedicated mark workers + pauses
// GC idle mark workers
// GC pauses (all GOMAXPROCS, even if just 1 is running)
cpuStats.gcTotalTime int64
// Time Ps spent in _Pidle.
// background scavenger
// scavenge assists
cpuStats.scavengeTotalTime int64
// GOMAXPROCS * (monotonic wall clock time elapsed)
// Time Ps spent in _Prunning or _Psyscall that's not any of the above.
cycles is the number of completed GC cycles, where a GC
cycle is sweep termination, mark, mark termination, and
sweep. This differs from memstats.numgc, which is
incremented at mark termination.
// lock-free list of empty blocks workbuf
// lock-free list of full blocks workbuf
debug.gctrace heap sizes for this cycle.
debug.gctrace heap sizes for this cycle.
debug.gctrace heap sizes for this cycle.
initialHeapLive is the value of gcController.heapLive at the
beginning of this GC cycle.
markDoneSema protects transitions from mark to mark termination.
// number of markroot jobs
// next markroot job
Timing/utilization stats for this cycle.
mode is the concurrency mode of the current GC cycle.
Number of roots of various root types. Set by gcMarkRootPrepare.
nStackRoots == len(stackRoots), but we have nStackRoots for
consistency.
Number of roots of various root types. Set by gcMarkRootPrepare.
nStackRoots == len(stackRoots), but we have nStackRoots for
consistency.
Number of roots of various root types. Set by gcMarkRootPrepare.
nStackRoots == len(stackRoots), but we have nStackRoots for
consistency.
Number of roots of various root types. Set by gcMarkRootPrepare.
nStackRoots == len(stackRoots), but we have nStackRoots for
consistency.
nproc uint32
nwait uint32
// total STW time this cycle
// nanotime() of last STW
stackRoots is a snapshot of all of the Gs that existed
before the beginning of concurrent marking. The backing
store of this must not be modified because it might be
shared with allgs.
Each type of GC state transition is protected by a lock.
Since multiple threads can simultaneously detect the state
transition condition, any thread that detects a transition
condition must acquire the appropriate transition lock,
re-check the transition condition and return if it no
longer holds or perform the transition if it does.
Likewise, any transition must invalidate the transition
condition before releasing the lock. This ensures that each
transition is performed by exactly one thread and threads
that need the transition to happen block until it has
happened.
startSema protects the transition from "off" to mark or
mark termination.
Timing/utilization stats for this cycle.
sweepWaiters is a list of blocked goroutines to wake when
we transition from mark termination to sweep.
// nanotime() of phase start
// nanotime() of phase start
// nanotime() of phase start
// nanotime() of phase start
tstart int64
userForced indicates the current GC cycle was forced by an
explicit user call.
wbufSpans struct{lock mutex; free mSpanList; busy mSpanList}
accumulate takes a cpuStats and adds in the current state of all GC CPU
counters.
gcMarkPhase indicates that we're in the mark phase and that certain counter
values should be used.
var work
// address that the low bit of mask represents the pointer state of.
// number of low-order bits to not overwrite
// some pointer bits starting at the address addr.
// number of bits in buf that are valid (including low)
Flush the bits that have been written, and add zeros as needed
to cover the full object [addr, addr+size).
Add padding of size bytes.
write appends the pointerness of the next valid pointer slots
using the low valid bits of bits. 1=pointer, 0=scalar.
func writeHeapBitsForAddr(addr uintptr) (h writeHeapBits)
Package-Level Functions (total 1602, in which 33 are exported)
BlockProfile returns n, the number of records in the current blocking profile.
If len(p) >= n, BlockProfile copies the profile into p and returns n, true.
If len(p) < n, BlockProfile does not change p and returns n, false.
Most clients should use the runtime/pprof package or
the testing package's -test.blockprofile flag instead
of calling BlockProfile directly.
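A sketch of the grow-and-retry loop implied by the n/ok contract (most programs should use runtime/pprof instead):
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        runtime.SetBlockProfileRate(1) // sample every blocking event

        // ... run code that blocks on channels or mutexes ...

        p := make([]runtime.BlockProfileRecord, 32)
        for {
            n, ok := runtime.BlockProfile(p)
            if ok {
                p = p[:n]
                break
            }
            p = make([]runtime.BlockProfileRecord, n+10) // grow and retry
        }
        fmt.Println("blocking records:", len(p))
    }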
Breakpoint executes a breakpoint trap.
Caller reports file and line number information about function invocations on
the calling goroutine's stack. The argument skip is the number of stack frames
to ascend, with 0 identifying the caller of Caller. (For historical reasons the
meaning of skip differs between Caller and Callers.) The return values report the
program counter, file name, and line number within the file of the corresponding
call. The boolean ok is false if it was not possible to recover the information.
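A minimal sketch with an illustrative helper: with skip=1, Caller reports the call site of the function that calls it.
    package main

    import (
        "fmt"
        "runtime"
    )

    func whereWasICalledFrom() {
        // skip=1 identifies the caller of this function.
        pc, file, line, ok := runtime.Caller(1)
        if !ok {
            return
        }
        fmt.Printf("called from %s:%d in %s\n", file, line, runtime.FuncForPC(pc).Name())
    }

    func main() {
        whereWasICalledFrom()
    }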
Callers fills the slice pc with the return program counters of function invocations
on the calling goroutine's stack. The argument skip is the number of stack frames
to skip before recording in pc, with 0 identifying the frame for Callers itself and
1 identifying the caller of Callers.
It returns the number of entries written to pc.
To translate these PCs into symbolic information such as function
names and line numbers, use CallersFrames. CallersFrames accounts
for inlined functions and adjusts the return program counters into
call program counters. Iterating over the returned slice of PCs
directly is discouraged, as is using FuncForPC on any of the
returned PCs, since these cannot account for inlining or return
program counter adjustment.
CallersFrames takes a slice of PC values returned by Callers and
prepares to return function/file/line information.
Do not change the slice until you are done with the Frames.
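A sketch of the recommended pattern: pass the PCs from Callers to CallersFrames and iterate the Frames rather than symbolizing PCs directly.
    package main

    import (
        "fmt"
        "runtime"
    )

    func printStack() {
        pc := make([]uintptr, 16)
        // skip=2 omits runtime.Callers itself and printStack.
        n := runtime.Callers(2, pc)
        frames := runtime.CallersFrames(pc[:n])
        for {
            frame, more := frames.Next()
            fmt.Printf("%s\n    %s:%d\n", frame.Function, frame.File, frame.Line)
            if !more {
                break
            }
        }
    }

    func main() {
        printStack()
    }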
CPUProfile panics.
It formerly provided raw access to chunks of
a pprof-format profile generated by the runtime.
The details of generating that format have changed,
so this functionality has been removed.
Deprecated: Use the runtime/pprof package,
or the handlers in the net/http/pprof package,
or the testing package's -test.cpuprofile flag instead.
FuncForPC returns a *Func describing the function that contains the
given program counter address, or else nil.
If pc represents multiple functions because of inlining, it returns
the *Func describing the innermost function, but with an entry of
the outermost function.
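A minimal sketch combining Caller and FuncForPC to name the current function:
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        pc, _, _, ok := runtime.Caller(0) // PC within main itself
        if !ok {
            return
        }
        if f := runtime.FuncForPC(pc); f != nil {
            file, line := f.FileLine(pc)
            fmt.Println(f.Name(), file, line)
        }
    }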
GC runs a garbage collection and blocks the caller until the
garbage collection is complete. It may also block the entire
program.
Goexit terminates the goroutine that calls it. No other goroutine is affected.
Goexit runs all deferred calls before terminating the goroutine. Because Goexit
is not a panic, any recover calls in those deferred functions will return nil.
Calling Goexit from the main goroutine terminates that goroutine
without func main returning. Since func main has not returned,
the program continues execution of other goroutines.
If all other goroutines exit, the program crashes.
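A minimal sketch showing that deferred calls still run when a goroutine calls Goexit:
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        done := make(chan struct{})
        go func() {
            defer close(done)                // deferred calls still run
            defer fmt.Println("cleaning up") // runs before the goroutine exits
            runtime.Goexit()
            fmt.Println("never reached")
        }()
        <-done
    }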
GOMAXPROCS sets the maximum number of CPUs that can be executing
simultaneously and returns the previous setting. It defaults to
the value of runtime.NumCPU. If n < 1, it does not change the current setting.
This call will go away when the scheduler improves.
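Because n < 1 leaves the setting unchanged, GOMAXPROCS(0) is the usual way to query the current value; a minimal sketch:
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Passing 0 only reports the current setting without changing it.
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0), "NumCPU:", runtime.NumCPU())
    }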
GOROOT returns the root of the Go tree. It uses the
GOROOT environment variable, if set at process start,
or else the root used during the Go build.
GoroutineProfile returns n, the number of records in the active goroutine stack profile.
If len(p) >= n, GoroutineProfile copies the profile into p and returns n, true.
If len(p) < n, GoroutineProfile does not change p and returns n, false.
Most clients should use the runtime/pprof package instead
of calling GoroutineProfile directly.
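A sketch of the grow-and-retry pattern for the n/ok contract, sized from NumGoroutine:
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        p := make([]runtime.StackRecord, runtime.NumGoroutine()+10)
        n, ok := runtime.GoroutineProfile(p)
        for !ok {
            // The slice was too small; grow to the size just reported and retry.
            p = make([]runtime.StackRecord, n+10)
            n, ok = runtime.GoroutineProfile(p)
        }
        fmt.Println("goroutines in profile:", n)
    }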
Gosched yields the processor, allowing other goroutines to run. It does not
suspend the current goroutine, so execution resumes automatically.
KeepAlive marks its argument as currently reachable.
This ensures that the object is not freed, and its finalizer is not run,
before the point in the program where KeepAlive is called.
A very simplified example showing where KeepAlive is required:
    type File struct { d int }
    d, err := syscall.Open("/file/path", syscall.O_RDONLY, 0)
    // ... do something if err != nil ...
    p := &File{d}
    runtime.SetFinalizer(p, func(p *File) { syscall.Close(p.d) })
    var buf [10]byte
    n, err := syscall.Read(p.d, buf[:])
    // Ensure p is not finalized until Read returns.
    runtime.KeepAlive(p)
    // No more uses of p after this point.
Without the KeepAlive call, the finalizer could run at the start of
syscall.Read, closing the file descriptor before syscall.Read makes
the actual system call.
Note: KeepAlive should only be used to prevent finalizers from
running prematurely. In particular, when used with unsafe.Pointer,
the rules for valid uses of unsafe.Pointer still apply.
LockOSThread wires the calling goroutine to its current operating system thread.
The calling goroutine will always execute in that thread,
and no other goroutine will execute in it,
until the calling goroutine has made as many calls to
UnlockOSThread as to LockOSThread.
If the calling goroutine exits without unlocking the thread,
the thread will be terminated.
All init functions are run on the startup thread. Calling LockOSThread
from an init function will cause the main function to be invoked on
that thread.
A goroutine should call LockOSThread before calling OS services or
non-Go library functions that depend on per-thread state.
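A minimal sketch of pinning a goroutine to its OS thread around thread-state-dependent work:
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        done := make(chan struct{})
        go func() {
            defer close(done)
            runtime.LockOSThread()
            defer runtime.UnlockOSThread()

            // All code here runs on one dedicated OS thread, as required
            // when calling into libraries that rely on per-thread state.
            fmt.Println("running on a locked OS thread")
        }()
        <-done
    }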
MemProfile returns a profile of memory allocated and freed per allocation
site.
MemProfile returns n, the number of records in the current memory profile.
If len(p) >= n, MemProfile copies the profile into p and returns n, true.
If len(p) < n, MemProfile does not change p and returns n, false.
If inuseZero is true, the profile includes allocation records
where r.AllocBytes > 0 but r.AllocBytes == r.FreeBytes.
These are sites where memory was allocated, but it has all
been released back to the runtime.
The returned profile may be up to two garbage collection cycles old.
This is to avoid skewing the profile toward allocations; because
allocations happen in real time but frees are delayed until the garbage
collector performs sweeping, the profile only accounts for allocations
that have had a chance to be freed by the garbage collector.
Most clients should use the runtime/pprof package or
the testing package's -test.memprofile flag instead
of calling MemProfile directly.
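A sketch of the size-then-copy loop implied by the n/ok contract; runtime/pprof does essentially this on your behalf:
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var p []runtime.MemProfileRecord
        n, ok := runtime.MemProfile(nil, true) // first call only asks for the size
        for !ok {
            p = make([]runtime.MemProfileRecord, n+50) // headroom for new records
            n, ok = runtime.MemProfile(p, true)
        }
        fmt.Println("allocation sites:", n)
    }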
MutexProfile returns n, the number of records in the current mutex profile.
If len(p) >= n, MutexProfile copies the profile into p and returns n, true.
Otherwise, MutexProfile does not change p, and returns n, false.
Most clients should use the runtime/pprof package
instead of calling MutexProfile directly.
NumCgoCall returns the number of cgo calls made by the current process.
NumCPU returns the number of logical CPUs usable by the current process.
The set of available CPUs is checked by querying the operating system
at process startup. Changes to operating system CPU allocation after
process startup are not reflected.
NumGoroutine returns the number of goroutines that currently exist.
ReadMemStats populates m with memory allocator statistics.
The returned memory allocator statistics are up to date as of the
call to ReadMemStats. This is in contrast with a heap profile,
which is a snapshot as of the most recently completed garbage
collection cycle.
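A minimal sketch reading a couple of MemStats fields; the fields chosen are illustrative:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("heap in use: %d bytes, completed GC cycles: %d\n",
            m.HeapInuse, m.NumGC)
    }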
ReadTrace returns the next chunk of binary tracing data, blocking until data
is available. If tracing is turned off and all the data accumulated while it
was on has been returned, ReadTrace returns nil. The caller must copy the
returned data before calling ReadTrace again.
ReadTrace must be called from one goroutine at a time.
SetBlockProfileRate controls the fraction of goroutine blocking events
that are reported in the blocking profile. The profiler aims to sample
an average of one blocking event per rate nanoseconds spent blocked.
To include every blocking event in the profile, pass rate = 1.
To turn off profiling entirely, pass rate <= 0.
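A hedged sketch: enable block profiling early, run the workload, then write the profile through runtime/pprof (the output file name is arbitrary):

    package main

    import (
        "os"
        "runtime"
        "runtime/pprof"
    )

    func main() {
        runtime.SetBlockProfileRate(1) // sample every blocking event

        // ... run the workload of interest ...

        f, err := os.Create("block.pprof")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        pprof.Lookup("block").WriteTo(f, 0)
    }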
SetCgoTraceback records three C functions to use to gather
traceback information from C code and to convert that traceback
information into symbolic information. These are used when printing
stack traces for a program that uses cgo.
The traceback and context functions may be called from a signal
handler, and must therefore use only async-signal safe functions.
The symbolizer function may be called while the program is
crashing, and so must be cautious about using memory. None of the
functions may call back into Go.
The context function will be called with a single argument, a
pointer to a struct:
struct {
Context uintptr
}
In C syntax, this struct will be
struct {
uintptr_t Context;
};
If the Context field is 0, the context function is being called to
record the current traceback context. It should record in the
Context field whatever information is needed about the current
point of execution to later produce a stack trace, probably the
stack pointer and PC. In this case the context function will be
called from C code.
If the Context field is not 0, then it is a value returned by a
previous call to the context function. This case is called when the
context is no longer needed; that is, when the Go code is returning
to its C code caller. This permits the context function to release
any associated resources.
While it would be correct for the context function to record a
complete stack trace whenever it is called, and simply copy that
out in the traceback function, in a typical program the context
function will be called many times without ever recording a
traceback for that context. Recording a complete stack trace in a
call to the context function is likely to be inefficient.
The traceback function will be called with a single argument, a
pointer to a struct:
struct {
Context uintptr
SigContext uintptr
Buf *uintptr
Max uintptr
}
In C syntax, this struct will be
struct {
uintptr_t Context;
uintptr_t SigContext;
uintptr_t* Buf;
uintptr_t Max;
};
The Context field will be zero to gather a traceback from the
current program execution point. In this case, the traceback
function will be called from C code.
Otherwise Context will be a value previously returned by a call to
the context function. The traceback function should gather a stack
trace from that saved point in the program execution. The traceback
function may be called from an execution thread other than the one
that recorded the context, but only when the context is known to be
valid and unchanging. The traceback function may also be called
deeper in the call stack on the same thread that recorded the
context. The traceback function may be called multiple times with
the same Context value; it will usually be appropriate to cache the
result, if possible, the first time this is called for a specific
context value.
If the traceback function is called from a signal handler on a Unix
system, SigContext will be the signal context argument passed to
the signal handler (a C ucontext_t* cast to uintptr_t). This may be
used to start tracing at the point where the signal occurred. If
the traceback function is not called from a signal handler,
SigContext will be zero.
Buf is where the traceback information should be stored. It should
be PC values, such that Buf[0] is the PC of the caller, Buf[1] is
the PC of that function's caller, and so on. Max is the maximum
number of entries to store. The function should store a zero to
indicate the top of the stack, or that the caller is on a different
stack, presumably a Go stack.
Unlike runtime.Callers, the PC values returned should, when passed
to the symbolizer function, return the file/line of the call
instruction. No additional subtraction is required or appropriate.
On all platforms, the traceback function is invoked when a call from
Go to C to Go requests a stack trace. On linux/amd64, linux/ppc64le,
linux/arm64, and freebsd/amd64, the traceback function is also invoked
when a signal is received by a thread that is executing a cgo call.
The traceback function should not make assumptions about when it is
called, as future versions of Go may make additional calls.
The symbolizer function will be called with a single argument, a
pointer to a struct:
struct {
PC uintptr // program counter to fetch information for
File *byte // file name (NUL terminated)
Lineno uintptr // line number
Func *byte // function name (NUL terminated)
Entry uintptr // function entry point
More uintptr // set non-zero if more info for this PC
Data uintptr // unused by runtime, available for function
}
In C syntax, this struct will be
struct {
uintptr_t PC;
char* File;
uintptr_t Lineno;
char* Func;
uintptr_t Entry;
uintptr_t More;
uintptr_t Data;
};
The PC field will be a value returned by a call to the traceback
function.
The first time the function is called for a particular traceback,
all the fields except PC will be 0. The function should fill in the
other fields if possible, setting them to 0/nil if the information
is not available. The Data field may be used to store any useful
information across calls. The More field should be set to non-zero
if there is more information for this PC, zero otherwise. If More
is set non-zero, the function will be called again with the same
PC, and may return different information (this is intended for use
with inlined functions). If More is zero, the function will be
called with the next PC value in the traceback. When the traceback
is complete, the function will be called once more with PC set to
zero; this may be used to free any information. Each call will
leave the fields of the struct set to the same values they had upon
return, except for the PC field when the More field is zero. The
function must not keep a copy of the struct pointer between calls.
When calling SetCgoTraceback, the version argument is the version
number of the structs that the functions expect to receive.
Currently this must be zero.
The symbolizer function may be nil, in which case the results of
the traceback function will be displayed as numbers. If the
traceback function is nil, the symbolizer function will never be
called. The context function may be nil, in which case the
traceback function will only be called with the context field set
to zero. If the context function is nil, then calls from Go to C
to Go will not show a traceback for the C portion of the call stack.
SetCgoTraceback should be called only once, ideally from an init function.
SetCPUProfileRate sets the CPU profiling rate to hz samples per second.
If hz <= 0, SetCPUProfileRate turns off profiling.
If the profiler is on, the rate cannot be changed without first turning it off.
Most clients should use the runtime/pprof package or
the testing package's -test.cpuprofile flag instead of calling
SetCPUProfileRate directly.
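As the note above suggests, a typical program profiles CPU usage through runtime/pprof rather than calling SetCPUProfileRate directly; a minimal sketch (output path arbitrary):

    package main

    import (
        "os"
        "runtime/pprof"
    )

    func main() {
        f, err := os.Create("cpu.pprof")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        if err := pprof.StartCPUProfile(f); err != nil {
            panic(err)
        }
        defer pprof.StopCPUProfile()

        // ... run the workload to be profiled ...
    }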
SetFinalizer sets the finalizer associated with obj to the provided
finalizer function. When the garbage collector finds an unreachable block
with an associated finalizer, it clears the association and runs
finalizer(obj) in a separate goroutine. This makes obj reachable again,
but now without an associated finalizer. Assuming that SetFinalizer
is not called again, the next time the garbage collector sees
that obj is unreachable, it will free obj.
SetFinalizer(obj, nil) clears any finalizer associated with obj.
The argument obj must be a pointer to an object allocated by calling
new, by taking the address of a composite literal, or by taking the
address of a local variable.
The argument finalizer must be a function that takes a single argument
to which obj's type can be assigned, and can have arbitrary ignored return
values. If either of these is not true, SetFinalizer may abort the
program.
Finalizers are run in dependency order: if A points at B, both have
finalizers, and they are otherwise unreachable, only the finalizer
for A runs; once A is freed, the finalizer for B can run.
If a cyclic structure includes a block with a finalizer, that
cycle is not guaranteed to be garbage collected and the finalizer
is not guaranteed to run, because there is no ordering that
respects the dependencies.
The finalizer is scheduled to run at some arbitrary time after the
program can no longer reach the object to which obj points.
There is no guarantee that finalizers will run before a program exits,
so typically they are useful only for releasing non-memory resources
associated with an object during a long-running program.
For example, an os.File object could use a finalizer to close the
associated operating system file descriptor when a program discards
an os.File without calling Close, but it would be a mistake
to depend on a finalizer to flush an in-memory I/O buffer such as a
bufio.Writer, because the buffer would not be flushed at program exit.
It is not guaranteed that a finalizer will run if the size of *obj is
zero bytes, because it may share the same address with other zero-size
objects in memory. See https://go.dev/ref/spec#Size_and_alignment_guarantees.
It is not guaranteed that a finalizer will run for objects allocated
in initializers for package-level variables. Such objects may be
linker-allocated, not heap-allocated.
Note that because finalizers may execute arbitrarily far into the future
after an object is no longer referenced, the runtime is allowed to perform
a space-saving optimization that batches objects together in a single
allocation slot. The finalizer for an unreferenced object in such an
allocation may never run if it always exists in the same batch as a
referenced object. Typically, this batching only happens for tiny
(on the order of 16 bytes or less) and pointer-free objects.
A finalizer may run as soon as an object becomes unreachable.
In order to use finalizers correctly, the program must ensure that
the object is reachable until it is no longer required.
Objects stored in global variables, or that can be found by tracing
pointers from a global variable, are reachable. For other objects,
pass the object to a call of the KeepAlive function to mark the
last point in the function where the object must be reachable.
For example, if p points to a struct, such as os.File, that contains
a file descriptor d, and p has a finalizer that closes that file
descriptor, and if the last use of p in a function is a call to
syscall.Write(p.d, buf, size), then p may be unreachable as soon as
the program enters syscall.Write. The finalizer may run at that moment,
closing p.d, causing syscall.Write to fail because it is writing to
a closed file descriptor (or, worse, to an entirely different
file descriptor opened by a different goroutine). To avoid this problem,
call KeepAlive(p) after the call to syscall.Write.
A single goroutine runs all finalizers for a program, sequentially.
If a finalizer must run for a long time, it should do so by starting
a new goroutine.
In the terminology of the Go memory model, a call
SetFinalizer(x, f) “synchronizes before” the finalization call f(x).
However, there is no guarantee that KeepAlive(x) or any other use of x
“synchronizes before” f(x), so in general a finalizer should use a mutex
or other synchronization mechanism if it needs to access mutable state in x.
For example, consider a finalizer that inspects a mutable field in x
that is modified from time to time in the main program before x
becomes unreachable and the finalizer is invoked.
The modifications in the main program and the inspection in the finalizer
need to use appropriate synchronization, such as mutexes or atomic updates,
to avoid read-write races.
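A hedged, Unix-only sketch combining SetFinalizer, KeepAlive, and an explicit close, mirroring the file-descriptor discussion above (the path and the file type are illustrative):

    package main

    import (
        "runtime"
        "syscall"
    )

    type file struct{ fd int }

    func open(path string) (*file, error) {
        fd, err := syscall.Open(path, syscall.O_RDONLY, 0)
        if err != nil {
            return nil, err
        }
        f := &file{fd: fd}
        // Close the descriptor if the *file is dropped without close.
        runtime.SetFinalizer(f, func(f *file) { syscall.Close(f.fd) })
        return f, nil
    }

    func (f *file) read(buf []byte) (int, error) {
        n, err := syscall.Read(f.fd, buf)
        runtime.KeepAlive(f) // keep f reachable until Read returns
        return n, err
    }

    func (f *file) close() error {
        runtime.SetFinalizer(f, nil) // cleared: close is now explicit
        return syscall.Close(f.fd)
    }

    func main() {
        f, err := open("/etc/hostname")
        if err != nil {
            return
        }
        defer f.close()
        var buf [64]byte
        f.read(buf[:])
    }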
SetMutexProfileFraction controls the fraction of mutex contention events
that are reported in the mutex profile. On average 1/rate events are
reported. The previous rate is returned.
To turn off profiling entirely, pass rate 0.
To just read the current rate, pass rate < 0.
(For n>1 the details of sampling may change.)
Stack formats a stack trace of the calling goroutine into buf
and returns the number of bytes written to buf.
If all is true, Stack formats stack traces of all other goroutines
into buf after the trace for the current goroutine.
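A minimal sketch dumping the stacks of all goroutines (the buffer size is arbitrary; Stack truncates the trace if the buffer is too small):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        buf := make([]byte, 1<<16)
        n := runtime.Stack(buf, true) // true: include all other goroutines
        fmt.Printf("%s\n", buf[:n])
    }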
StartTrace enables tracing for the current process.
While tracing, the data will be buffered and available via ReadTrace.
StartTrace returns an error if tracing is already enabled.
Most clients should use the runtime/trace package or the testing package's
-test.trace flag instead of calling StartTrace directly.
StopTrace stops tracing, if it was previously enabled.
StopTrace only returns after all the reads for the trace have completed.
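A hedged sketch wiring StartTrace, ReadTrace, and StopTrace together; most programs should use runtime/trace instead, and the output file name is arbitrary:

    package main

    import (
        "os"
        "runtime"
    )

    func main() {
        f, err := os.Create("runtime.trace")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        if err := runtime.StartTrace(); err != nil {
            panic(err)
        }
        // Drain trace data on a separate goroutine while the workload runs.
        done := make(chan struct{})
        go func() {
            defer close(done)
            for {
                data := runtime.ReadTrace()
                if data == nil {
                    return // tracing stopped and all data consumed
                }
                f.Write(data)
            }
        }()

        // ... run the workload of interest ...

        runtime.StopTrace()
        <-done
    }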
ThreadCreateProfile returns n, the number of records in the thread creation profile.
If len(p) >= n, ThreadCreateProfile copies the profile into p and returns n, true.
If len(p) < n, ThreadCreateProfile does not change p and returns n, false.
Most clients should use the runtime/pprof package instead
of calling ThreadCreateProfile directly.
UnlockOSThread undoes an earlier call to LockOSThread.
If this drops the number of active LockOSThread calls on the
calling goroutine to zero, it unwires the calling goroutine from
its fixed operating system thread.
If there are no active LockOSThread calls, this is a no-op.
Before calling UnlockOSThread, the caller must ensure that the OS
thread is suitable for running other goroutines. If the caller made
any permanent changes to the state of the thread that would affect
other goroutines, it should not call this function and thus leave
the goroutine locked to the OS thread until the goroutine (and
hence the thread) exits.
Version returns the Go tree's version string.
It is either the commit hash and date at the time of the build or,
when possible, a release tag like "go1.3".
How to extract and insert information held in the st_info field.
func _ELF_ST_TYPE(val byte) byte
abort crashes the runtime in situations where even throw might not
work. In general it should do something a debugger will recognize
(e.g., an INT3 on x86). A crash in abort is recognized by the
signal handler, which will attempt to tear down the runtime
immediately.
abs returns the absolute value of x.
Special cases are:
abs(±Inf) = +Inf
abs(NaN) = NaN
Called from write_err_android.go only, but defined in sys_linux_*.s;
declared here (instead of in write_err_android.go) for go vet on non-android builds.
The return value is the raw syscall result, which may encode an error number.
This function may be called in nosplit context and thus must be nosplit.
Associate p and the current m.
This function is allowed to have write barriers even if the caller
isn't because it immediately acquires pp.
activeModules returns a slice of active modules.
A module is active once its gcdatamask and gcbssmask have been
assembled and it is usable by the GC.
This is nosplit/nowritebarrier because it is called by the
cgo pointer checking code.
Should be a built-in for unsafe.Pointer?
add1 returns the byte pointer p+1.
addAdjustedTimers adds any timers we adjusted in adjusttimers
back to the timer heap.
addb returns the byte pointer p+n.
addCovMeta is invoked during package "init" functions by the
compiler when compiling for coverage instrumentation; here 'p' is a
meta-data blob of length 'dlen' for the package in question, 'hash'
is a compiler-computed md5.sum for the blob, 'pkpath' is the
package path, 'pkid' is the hard-coded ID that the compiler is
using for the package (or -1 if the compiler doesn't think a
hard-coded ID is needed), and 'cmode'/'cgran' are the coverage
counter mode and granularity requested by the user. Return value is
the ID for the package for use by the package code itself.
addExitHook registers the specified function 'f' to be run at
program termination (e.g. when someone invokes os.Exit(), or when
main.main returns). Hooks are run in reverse order of registration:
first hook added is the last one run.
CAREFUL: the expectation is that addExitHook should only be called
from a safe context (e.g. not an error/panic path or signal
handler, preemption enabled, allocation allowed, write barriers
allowed, etc), and that the exit function 'f' will be invoked under
similar circumstances. That is to say, we are expecting that 'f'
uses normal / high-level Go code as opposed to one of the more
restricted dialects used for the trickier parts of the runtime.
Adds a newly allocated M to the extra M list.
Adds a finalizer to the object p. Returns true if it succeeded.
Called from linker-generated .initarray; declared for go vet; do NOT call from Go.
addOneOpenDeferFrame scans the stack (in gentraceback order, from inner frames to
outer frames) for the first frame (if any) with open-coded defers. If it finds
one, it adds a single entry to the defer chain for that frame. The entry added
represents all the defers in the associated open defer frame, and is sorted in
order with respect to any non-open-coded defers.
addOneOpenDeferFrame stops (possibly without adding a new entry) if it encounters
an in-progress open defer entry. An in-progress open defer entry means there has
been a new panic because of a defer in the associated frame. addOneOpenDeferFrame
does not add an open defer entry past a started entry, because that started entry
still needs to be finished, and addOneOpenDeferFrame will be called when that started
entry is completed. The defer removal loop in gopanic() similarly stops at an
in-progress defer entry. Together, addOneOpenDeferFrame and the defer removal loop
ensure the invariant that there is no open defer entry further up the stack than
an in-progress defer, and also that the defer removal loop is guaranteed to remove
all not-in-progress open defer entries from the defer chain.
If sp is non-nil, addOneOpenDeferFrame starts the stack scan from the frame
specified by sp. If sp is nil, it uses the sp from the current defer record (which
has just been finished). Hence, it continues the stack scan from the frame of the
defer that just finished. It skips any frame that already has a (not-in-progress)
open-coded _defer record in the defer chain.
Note: All entries of the defer chain (including this new open-coded entry) have
their pointers (including sp) adjusted properly if the stack moves while
running deferred functions. Also, it is safe to pass in the sp arg (which is
the direct result of calling getcallersp()), because all pointer variables
(including arguments) are adjusted as needed during stack copies.
addrsToSummaryRange converts base and limit pointers into a range
of entries for the given summary level.
The returned range is inclusive on the lower bound and exclusive on
the upper bound.
Adds the special record s to the list of special records for
the object p. All fields of s should be filled in except for
offset & next, which this routine will fill in.
Returns true if the special was successfully added, false otherwise.
(The add will fail only if a record with the same p and s->kind
already exists.)
Note: this changes some unsynchronized operations to synchronized operations
addtimer adds a timer to the current P.
This should only be called with a newly created timer.
That avoids the risk of changing the when field of a timer in some P's heap,
which could cause the heap to become unsorted.
func adjustctxt(gp *g, adjinfo *adjustinfo)
func adjustdefers(gp *g, adjinfo *adjustinfo)
Note: the argument/return area is adjusted by the callee.
func adjustpanics(gp *g, adjinfo *adjustinfo)
adjustpointer checks whether *vpp is in the old stack described by adjinfo.
If so, it rewrites *vpp to point into the new stack.
bv describes the memory starting at address scanp.
Adjust any pointers contained therein.
adjustSignalStack adjusts the current stack guard based on the
stack pointer that is actually in use while handling a signal.
We do this in case some non-Go code called sigaltstack.
This reports whether the stack was adjusted, and if so stores the old
signal stack in *gsigstack.
func adjustsudogs(gp *g, adjinfo *adjustinfo)
adjusttimers looks through the timers in the current P's heap for
any timers that have been modified to run earlier, and puts them in
the correct place in the heap. While looking for those timers,
it also moves timers that have been modified to run later,
and removes deleted timers. The caller must have locked the timers for pp.
func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr)
alignDown rounds n down to a multiple of a. a must be a power of 2.
alignUp rounds n up to a multiple of a. a must be a power of 2.
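These helpers are unexported, but their descriptions imply the standard power-of-two masking trick; a standalone illustration under that assumption (a must be a power of 2):

    package main

    import "fmt"

    // alignDown clears the low bits of n so the result is a multiple of a.
    func alignDown(n, a uintptr) uintptr { return n &^ (a - 1) }

    // alignUp rounds n up to the next multiple of a.
    func alignUp(n, a uintptr) uintptr { return (n + a - 1) &^ (a - 1) }

    func main() {
        fmt.Println(alignDown(4097, 4096)) // 4096
        fmt.Println(alignUp(4097, 4096))   // 8192
    }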
allGsSnapshot returns a snapshot of the slice of all Gs.
The world must be stopped or allglock must be held.
Allocate a new m unassociated with any thread.
Can use p for allocation context if needed.
fn is recorded as the new m's m.mstartfn.
id is optional pre-allocated m ID. Omit by passing -1.
This function is allowed to have write barriers even if the caller
isn't because it borrows pp.
arena_arena_Free is a wrapper around (*userArena).free.
arena_arena_New is a wrapper around (*userArena).new, except that typ
is an any (must be a *_type, still) and typ must be a type descriptor
for a pointer to the type to actually be allocated, i.e. pass a *T
to allocate a T. This is necessary because this function returns a *T.
arena_arena_Slice is a wrapper around (*userArena).slice.
arena_heapify takes a value that lives in an arena and makes a copy
of it on the heap. Values that don't live in an arena are returned unmodified.
arena_newArena is a wrapper around newUserArena.
arenaBase returns the low address of the region covered by heap
arena i.
arenaIndex returns the index into mheap_.arenas of the arena
containing metadata for p. This index combines an index into the
L1 map and an index into the L2 map and should be used as
mheap_.arenas[ai.l1()][ai.l2()].
If p is outside the range of valid heap addresses, either l1() or
l2() will be out of bounds.
It is nosplit because it's called by spanOf and several other
nosplit functions.
nosplit for use in linux startup sysargs.
func asanpoison(addr unsafe.Pointer, sz uintptr)
func asanregisterglobals(addr unsafe.Pointer, sz uintptr)
func asanunpoison(addr unsafe.Pointer, sz uintptr)
func asmcgocall(fn, arg unsafe.Pointer) int32
func asmcgocall_no_g(fn, arg unsafe.Pointer)
func assertE2I(inter *interfacetype, t *_type) *itab
func assertE2I2(inter *interfacetype, e eface) (r iface)
func assertI2I(inter *interfacetype, tab *itab) *itab
func assertI2I2(inter *interfacetype, i iface) (r iface)
func assertLockHeld(l *mutex)
asyncPreempt saves all user registers and calls asyncPreempt2.
When stack scanning encounters an asyncPreempt frame, it scans that
frame and its parent frame conservatively.
asyncPreempt is implemented in assembly.
atoi is like atoi64 but for integers
that fit into an int.
atoi32 is like atoi but for integers
that fit into an int32.
atoi64 parses an int64 from a string s.
The bool result reports whether s is a number
representable by a value of type int64.
atomic_casPointer is the implementation of runtime/internal/UnsafePointer.CompareAndSwap
(like CompareAndSwapNoWB but with the write barrier).
atomic_storePointer is the implementation of runtime/internal/UnsafePointer.Store
(like StoreNoWB but with the write barrier).
atomicAllG returns &allgs[0] and len(allgs) for use with atomicAllGIndex.
atomicAllGIndex returns ptr[i] with the allgptr returned from atomicAllG.
atomicstorep performs *ptr = new atomically and invokes a write barrier.
atomicwb performs a write barrier before an atomic pointer write.
The caller should guard the call with "if writeBarrier.enabled".
called from assembly.
called from assembly.
badPointer throws bad pointer in heap panic.
This runs on a foreign stack, without an m or a g. No stack split.
badTimer is called if the timer data structures have been corrupted,
presumably due to racy use by the program. We panic here rather than
panicking due to invalid slice access while holding locks.
See issue #25686.
Background scavenger.
The background scavenger maintains the RSS of the application below
the line described by the proportional scavenging statistics in
the mheap struct.
Build a binary search tree with the n objects in the list
x.obj[idx], x.obj[idx+1], ..., x.next.obj[0], ...
Returns the root of that tree, and the buf+idx of the nth object after x.obj[idx].
(The first object that was not included in the binary search tree.)
If n == 0, returns nil, x.
blockableSig reports whether sig may be blocked by the signal mask.
We never want to block the signals marked _SigUnblock;
these are the synchronous signals that turn into a Go panic.
We never want to block the preemption signal if it is being used.
In a Go program--not a c-archive/c-shared--we never want to block
the signals marked _SigKill or _SigThrow, as otherwise it's possible
for all running threads to block them and delay their delivery until
we start a new thread. When linked into a C program we let the C code
decide on the disposition of those signals.
blockAlignSummaryRange aligns indices into the given level to that
level's block width (1 << levelBits[level]). It assumes lo is inclusive
and hi is exclusive, and so aligns them down and up respectively.
func blockevent(cycles int64, skip int)
blocksampled returns true for all events where cycles >= rate. Shorter
events have a cycles/rate random chance of returning true.
bool2int returns 0 if x is false or 1 if x is true.
bucketMask returns 1<<b - 1, optimized for code generation.
bucketShift returns 1<<b, optimized for code generation.
bulkBarrierBitmap executes write barriers for copying from [src,
src+size) to [dst, dst+size) using a 1-bit pointer bitmap. src is
assumed to start maskOffset bytes into the data covered by the
bitmap in bits (which may not be a multiple of 8).
This is used by bulkBarrierPreWrite for writes to data and BSS.
bulkBarrierPreWrite executes a write barrier
for every pointer slot in the memory range [src, src+size),
using pointer/scalar information from [dst, dst+size).
This executes the write barriers necessary before a memmove.
src, dst, and size must be pointer-aligned.
The range [dst, dst+size) must lie within a single object.
It does not perform the actual writes.
As a special case, src == 0 indicates that this is being used for a
memclr. bulkBarrierPreWrite will pass 0 for the src of each write
barrier.
Callers should call bulkBarrierPreWrite immediately before
calling memmove(dst, src, size). This function is marked nosplit
to avoid being preempted; the GC must not stop the goroutine
between the memmove and the execution of the barriers.
The caller is also responsible for cgo pointer checks if this
may be writing Go pointers into non-Go memory.
The pointer bitmap is not maintained for allocations containing
no pointers at all; any caller of bulkBarrierPreWrite must first
make sure the underlying allocation contains pointers, usually
by checking typ.PtrBytes.
Callers must perform cgo checks if goexperiment.CgoCheck2.
bulkBarrierPreWriteSrcOnly is like bulkBarrierPreWrite but
does not execute write barriers for [dst, dst+size).
In addition to the requirements of bulkBarrierPreWrite
callers need to ensure [dst, dst+size) is zeroed.
This is used for special cases where e.g. dst was just
created and zeroed with malloc.
func bytealg_MakeNoZero(len int) []byte
func call1024(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call1048576(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call1073741824(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call128(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call131072(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call134217728(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
in asm_*.s
not called directly; definitions here supply type information for traceback.
These must have the same signature (arg pointer map) as reflectcall.
func call16384(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call16777216(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call2048(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call2097152(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call256(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call262144(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call268435456(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call32(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call32768(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call33554432(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call4096(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call4194304(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call512(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call524288(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call536870912(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call64(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call65536(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call67108864(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call8192(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
func call8388608(typ, fn, stackArgs unsafe.Pointer, stackArgsSize, stackRetOffset, frameSize uint32, regArgs *abi.RegArgs)
callCgoMmap calls the mmap function in the runtime/cgo package
using the GCC calling convention. It is implemented in assembly.
callCgoMunmap calls the munmap function in the runtime/cgo package
using the GCC calling convention. It is implemented in assembly.
callCgoSigaction calls the sigaction function in the runtime/cgo package
using the GCC calling convention. It is implemented in assembly.
callCgoSymbolizer calls the cgoSymbolizer function.
canpanic returns false if a signal should throw instead of
panicking.
canPreemptM reports whether mp is in a state that is safe to preempt.
It is nosplit because it has nosplit callers.
func cansemacquire(addr *uint32) bool
The Gscanstatuses are acting like locks and this releases them.
If it proves to be a performance hit we should be able to make these
simple atomic stores but for now we are going to throw if
we see an inconsistent state.
casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable.
Returns old status. Cannot call casgstatus directly, because we are racing with an
async wakeup that might come in from netpoll. If we see Gwaiting from the readgstatus,
it might have become Grunnable by the time we get to the cas. If we called casgstatus,
it would loop waiting for the status to go back to Gwaiting, which it never will.
casGFromPreempted attempts to transition gp from _Gpreempted to
_Gwaiting. If successful, the caller is responsible for
re-scheduling gp.
If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus
and casfrom_Gscanstatus instead.
casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that
put it in the Gscan state is finished.
casGToPreemptScan transitions gp from _Grunning to _Gscan|_Gpreempted.
TODO(austin): This is the only status operation that both changes
the status and locks the _Gscan bit. Rethink this.
casGToWaiting transitions gp from old to _Gwaiting, and sets the wait reason.
Use this over casgstatus when possible to ensure that a waitreason is set.
This will return false if the gp is not in the expected status and the cas fails.
This acts like a lock acquire while the casfromgstatus acts like a lock release.
bindm stores the g0 of the current m into a thread-specific value.
We allocate a pthread per-thread variable using pthread_key_create,
to register a thread-exit-time destructor.
We are here setting the thread-specific value of the pthread key, to enable the destructor.
This lets the pthread_key_destructor call dropm while the C thread is exiting.
The saved g is used by pthread_key_destructor because the g stored in the TLS
by Go may be cleared on some platforms before the destructor is invoked, so we
restore g from the stored value before calling dropm.
We store g0 instead of m, to make the assembly code simpler,
since we need to restore g0 in runtime.cgocallback.
On systems without pthreads, like Windows, bindm shouldn't be used.
NOTE: this always runs without a P, so, nowritebarrierrec required.
Call from Go to C.
This must be nosplit because it's used for syscalls on some
platforms. Syscalls may have untyped arguments on the stack, so
it's not safe to grow or scan the stack.
Not all cgocallback frames are actually cgocallback,
so not all have these arguments. Mark them uintptr so that the GC
does not misinterpret memory when the arguments are not present.
cgocallback is not called from Go, only from crosscall2.
This in turn calls cgocallbackg, which is where we'll find
pointer-declared arguments.
When fn is nil (frame is saved g), call dropm instead,
this is used when the C thread is exiting.
Call from C back to Go. fn must point to an ABIInternal Go entry-point.
func cgocallbackg1(fn, frame unsafe.Pointer, ctxt uintptr)
cgoCheckArg is the real work of cgoCheckPointer. The argument p
is either a pointer to the value (of type t), or the value itself,
depending on indir. The top parameter is whether we are at the top
level, where Go pointers are allowed. Go pointers to pinned objects are
always allowed.
cgoCheckBits checks the block of memory at src, for up to size
bytes, and throws if it finds an unpinned Go pointer. The gcbits mark each
pointer value. The src pointer is off bytes into the gcbits.
cgoCheckMemmove is called when moving a block of memory.
It throws if the program is copying a block that contains an unpinned Go
pointer into non-Go memory.
This is called from generated code when GOEXPERIMENT=cgocheck2 is enabled.
cgoCheckMemmove2 is called when moving a block of memory.
dst and src point off bytes into the value to copy.
size is the number of bytes to copy.
It throws if the program is copying a block that contains an unpinned Go
pointer into non-Go memory.
cgoCheckPointer checks if the argument contains a Go pointer that
points to an unpinned Go pointer, and panics if it does.
cgoCheckPtrWrite is called whenever a pointer is stored into memory.
It throws if the program is storing an unpinned Go pointer into non-Go
memory.
This is called from generated code when GOEXPERIMENT=cgocheck2 is enabled.
cgoCheckResult is called to check the result parameter of an
exported Go function. It panics if the result is or contains a Go
pointer.
cgoCheckSliceCopy is called when copying n elements of a slice.
src and dst are pointers to the first element of the slice.
typ is the element type of the slice.
It throws if the program is copying slice elements that contain unpinned Go
pointers into non-Go memory.
cgoCheckTypedBlock checks the block of memory at src, for up to size bytes,
and throws if it finds an unpinned Go pointer. The type of the memory is typ,
and src is off bytes into that type.
cgoCheckUnknownPointer is called for an arbitrary pointer into Go
memory. It checks whether that Go memory contains any other
pointer into unpinned Go memory. If it does, we panic.
The return values are unused but useful to see in panic tracebacks.
cgoCheckUsingType is like cgoCheckTypedBlock, but is a last ditch
fall back to look for pointers in src using the type information.
We only use this when looking at a value on the stack when the type
uses a GC program, because otherwise it's more efficient to use the
GC bits. This is called on the system stack.
cgoContextPCs gets the PC values from a cgo traceback.
cgoInRange reports whether p is between start and end.
cgoIsGoPointer reports whether the pointer is a Go pointer--a
pointer to Go memory. We only care about Go memory that might
contain pointers.
called from (incomplete) assembly.
cgoUse is called by cgo-generated code (using go:linkname to get at
an unexported name). The calls serve two purposes:
1) they are opaque to escape analysis, so the argument is considered to
escape to the heap.
2) they keep the argument alive until the call site; the call is emitted after
the end of the (presumed) use of the argument by C.
cgoUse should not actually be called (see cgoAlwaysFalse).
chanbuf(c, i) is pointer to the i'th slot in the buffer.
chanrecv receives on channel c and writes the received data to ep.
ep may be nil, in which case received data is ignored.
If block == false and no elements are available, returns (false, false).
Otherwise, if c is closed, zeros *ep and returns (true, false).
Otherwise, fills in *ep with an element and returns (true, true).
A non-nil ep must point to the heap or the caller's stack.
entry points for <- c from compiled code.
Generic single channel send/recv.
If block is not nil, then the protocol will not sleep but return if it
could not complete.
Sleep can wake up with g.param == nil when a channel involved in the
sleep has been closed. It is easiest to loop and re-run the operation;
we'll see that it's now closed.
entry point for c <- x from compiled code.
checkASM reports whether assembly runtime checks have passed.
Check for deadlock situation.
The check is based on number of running M's, if 0 -> deadlock.
sched.lock must be held.
Check for idle-priority GC, without a P on entry.
If some GC work, a P, and a worker G are all available, the P and G will be
returned. The returned P has not been wired yet.
sched.lock must be held.
checkptrBase returns the base address for the allocation containing
the address p.
Importantly, if p1 and p2 point into the same variable, then
checkptrBase(p1) == checkptrBase(p2). However, the converse/inverse
is not necessarily true as allocations can have trailing padding,
and multiple variables may be packed into a single allocation.
checkptrStraddles reports whether the first size-bytes of memory
addressed by ptr is known to straddle more than one Go allocation.
Check all Ps for a runnable G to steal.
On entry we have no P. If a G is available to steal and a P is available,
the P is returned which the caller should acquire and attempt to steal the
work to.
checkTimers runs any timers for the P that are ready.
If now is not 0 it is the current time.
It returns the passed time or the current time if now was passed as 0,
and the time when the next timer should run or 0 if there is no next timer,
and reports whether it ran any timers.
If the time when the next timer should run is not 0,
it is always larger than the returned time.
We pass now in and out to avoid extra calls of nanotime.
Check all Ps for a timer expiring sooner than pollUntil.
Returns updated pollUntil value.
chunkBase returns the base address of the palloc chunk at index ci.
chunkIndex returns the global index of the palloc chunk containing the
pointer p.
chunkPageIndex computes the index of the page that contains p,
relative to the chunk which contains p.
cleantimers cleans up the head of the timer queue. This speeds up
programs that create and delete timers; leaving them in the heap
slows down addtimer. Reports whether no timer problems were found.
The caller must have locked the timers for pp.
clearDeletedTimers removes all deleted timers from the P's timer heap.
This is used to avoid clogging up the heap if the program
starts a lot of long-running timers and then stops them.
For example, this can happen via context.WithTimeout.
This is the only function that walks through the entire timer heap,
other than moveTimers which only runs when the world is stopped.
The caller must have locked the timers for pp.
clearSignalHandlers clears all signal handlers that are not ignored
back to the default. This is called by the child after a fork, so that
we can enable the signal mask for the exec without worrying about
running a signal handler in the child.
clobberfree sets the memory content at x to bad content, for debugging
purposes.
func closeonexec(fd int32)
func compute0(_ *statAggregate, out *metricValue)
computeRZlog computes the size of the redzone.
Refer to the implementation of the compiler-rt.
func concatstring2(buf *tmpBuf, a0, a1 string) string
func concatstring3(buf *tmpBuf, a0, a1, a2 string) string
func concatstring4(buf *tmpBuf, a0, a1, a2, a3 string) string
func concatstring5(buf *tmpBuf, a0, a1, a2, a3, a4 string) string
concatstrings implements a Go string concatenation x+y+z+...
The operands are passed in the slice a.
If buf != nil, the compiler has determined that the result does not
escape the calling function, so the string data can be stored in buf
if small enough.
convI2I returns the new itab to be used for the destination value
when converting a value with itab src to the dst interface.
convT converts a value of type t, which is pointed to by v, to a pointer that can
be used as the second word of an interface value.
func convTslice(val []byte) (x unsafe.Pointer)
func convTstring(val string) (x unsafe.Pointer)
copysign returns a value with the magnitude
of x and the sign of y.
Copies gp's stack to a new stack of a different size.
Caller must have changed gp status to Gcopystack.
countrunes returns the number of runes in s.
countSub subtracts two counts obtained from profIndex.dataCount or profIndex.tagCount,
assuming that they are no more than 2^29 apart (guaranteed since they are never more than
len(data) or len(tags) apart, respectively).
tagCount wraps at 2^30, while dataCount wraps at 2^32.
This function works for both.
cpuinit sets up CPU feature flags and calls internal/cpu.Initialize. env should be the complete
value of the GODEBUG environment variable.
careful: cputicks is not guaranteed to be monotonic! In particular, we have
noticed drift between cpus on certain os/arch combinations. See issue 8976.
create returns an fd to a write-only file.
debugCallCheck checks whether it is safe to inject a debugger
function call with return PC pc. If not, it returns a string
explaining why.
func debugCallPanicked(val any)
debugCallWrap starts a new goroutine to run a debug call and blocks
the calling goroutine. On the goroutine, it prepares to recover
panics from the debug call, and then calls the call dispatching
function at PC dispatch.
This must be deeply nosplit because there are untyped values on the
stack from debugCallV2.
debugCallWrap1 is the continuation of debugCallWrap on the callee
goroutine.
func debugCallWrap2(dispatch uintptr)
decoderune returns the non-ASCII rune at the start of
s[k:] and the index after the rune in s.
decoderune assumes that caller has checked that
the to be decoded rune is a non-ASCII rune.
If the string appears to be incomplete or decoding problems
are encountered (runeerror, k + 1) is returned to ensure
progress when decoderune is used to iterate over a string.
deductAssistCredit reduces the current G's assist credit
by size bytes, and assists the GC if necessary.
Caller must be preemptible.
Returns the G for which the assist credit was accounted.
deductSweepCredit deducts sweep credit for allocating a span of
size spanBytes. This must be performed *before* the span is
allocated to ensure the system has enough credit. If necessary, it
performs sweeping to prevent going in to debt. If the caller will
also sweep pages (e.g., for a large allocation), it can pass a
non-zero callerSweepPages to leave that many pages unswept.
deductSweepCredit makes a worst-case assumption that all spanBytes
bytes of the ultimately allocated span will be available for object
allocation.
deductSweepCredit is the core of the "proportional sweep" system.
It uses statistics gathered by the garbage collector to perform
enough sweeping so that all pages are swept during the concurrent
sweep phase between GC cycles.
mheap_ must NOT be locked.
deferCallSave calls fn() after saving the caller's pc and sp in the
panic record. This allows the runtime to return to the Goexit defer
processing loop, in the unusual case where the Goexit may be
bypassed by a successful recover.
This is marked as a wrapper by the compiler so it doesn't appear in
tracebacks.
Create a new deferred function fn, which has no arguments and results.
The compiler turns a defer statement into a call to this.
deferprocStack queues a new deferred function with a defer record on the stack.
The defer record must have its fn field initialized.
All other fields can contain junk.
Nosplit because of the uninitialized pointer fields on the stack.
deferreturn runs deferred functions for the caller's frame.
The compiler inserts a call to this at the end of any
function which calls defer.
deltimer deletes the timer t. It may be on some other P, so we can't
actually remove it from the timers heap. We can only mark it as deleted.
It will be removed in due course by the P whose heap it is on.
Reports whether the timer was removed before it was run.
dieFromSignal kills the program with a signal.
This provides the expected exit status for the shell.
This is only called with fatal signals expected to kill the process.
128/64 -> 64 quotient, 64 remainder.
Adapted from Hacker's Delight.
divRoundUp returns ceil(n / a).
dlog returns a debug logger. The caller can use methods on the
returned logger to add values, which will be space-separated in the
final output, much like println. The caller must call end() to
finish the message.
dlog can be used from highly-constrained corners of the runtime: it
is safe to use in the signal handler, from within the write
barrier, from within the stack implementation, and in places that
must be recursively nosplit.
This will be compiled away if built without the debuglog build tag.
However, argument construction may not be. If any of the arguments
are not literals or trivial expressions, consider protecting the
call with "if dlogEnabled".
doaddtimer adds t to the current P's heap.
The caller must have locked the timers for pp.
dodeltimer removes timer i from the current P's heap.
We are locked on the P when this is called.
It returns the smallest changed index in pp.timers.
The caller must have locked the timers for pp.
dodeltimer0 removes timer 0 from the current P's heap.
We are locked on the P when this is called.
It reports whether it saw no problems due to races.
The caller must have locked the timers for pp.
dolockOSThread is called by LockOSThread and lockOSThread below
after they modify m.locked. Do not allow preemption during this call,
or else the m might be different in this function than in the caller.
gp is the crashing g running on this M, but may be a user G, while getg() is
always g0.
doRecordGoroutineProfile writes gp1's call stack and labels to an in-progress
goroutine profile. Preemption is disabled.
This may be called via tryRecordGoroutineProfile in two ways: by the
goroutine that is coordinating the goroutine profile (running on its own
stack), or from the scheduler in preparation to execute gp1 (running on the
system stack).
doSigPreempt handles a preemption signal on gp.
dounlockOSThread is called by UnlockOSThread and unlockOSThread below
after they update m->locked. Do not allow preemption during this call,
or else the m might be different in this function than in the caller.
dropg removes the association between m and the current goroutine m->curg (gp for short).
Typically a caller sets gp's status away from Grunning and then
immediately calls dropg to finish the job. The caller is also responsible
for arranging that gp will be restarted using ready at an
appropriate time. After calling dropg and arranging for gp to be
readied later, the caller can do other work but eventually should
call schedule to restart the scheduling of goroutines on this m.
dropm puts the current m back onto the extra list.
1. On systems without pthreads, like Windows
dropm is called when a cgo callback has called needm but is now
done with the callback and returning back into the non-Go thread.
The main expense here is the call to signalstack to release the
m's signal stack, and then the call to needm on the next callback
from this thread. It is tempting to try to save the m for next time,
which would eliminate both these costs, but there might not be
a next time: the current thread (which Go does not control) might exit.
If we saved the m for that thread, there would be an m leak each time
such a thread exited. Instead, we acquire and release an m on each
call. These should typically not be scheduling operations, just a few
atomics, so the cost should be small.
2. On systems with pthreads
dropm is called while a non-Go thread is exiting.
We allocate a pthread per-thread variable using pthread_key_create,
to register a thread-exit-time destructor.
And store the g into a thread-specific value associated with the pthread key,
when first return back to C.
So that the destructor would invoke dropm while the non-Go thread is exiting.
This is much faster since it avoids expensive signal-related syscalls.
NOTE: this always runs without a P, so, nowritebarrierrec required.
dump kinds & offsets of interesting fields in bv.
dumpint() the kind & offset of each field in an object.
func dumpGCProg(p *byte)
func dumpgoroutine(gp *g)
func dumpgstatus(gp *g)
dump a uint64 in a varint format parseable by encoding/binary.
dump varint uint64 length followed by memory contents.
func dumpmemstats(m *MemStats)
dump an object.
func dumpotherroot(description string, to unsafe.Pointer)
dump information for a type.
func dwritebyte(b byte)
elideWrapperCalling reports whether a wrapper function that called
function id should be elided from stack traces.
empty reports whether a read from c would block (that is, the channel is
empty). It uses a single atomic read of mutable state.
enableWER is called by setTraceback("wer").
Windows Error Reporting (WER) is only supported on Windows.
encoderune writes into p (which must be large enough) the UTF-8 encoding of the rune.
It returns the number of bytes written.
endCheckmarks ends the checkmarks phase.
ensureSigM starts one global, sleeping thread to make sure at least one thread
is available to catch signals enabled for os/signal.
Standard syscall entry used by the go syscall library and normal cgo calls.
This is exported via linkname to assembly in the syscall package and x/sys.
The same as entersyscall(), but with a hint that the syscall is blocking.
envKeyEqual reports whether a == b, with ASCII-only case insensitivity
on Windows. The two strings must have the same length.
func evacuate_fast32(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_fast64(t *maptype, h *hmap, oldbucket uintptr)
func evacuate_faststr(t *maptype, h *hmap, oldbucket uintptr)
Schedules gp to run on the current M.
If inheritTime is true, gp inherits the remaining time in the
current time slice. Otherwise, it starts a new time slice.
Never returns.
Write barriers are allowed because this is called immediately after
acquiring a P in several places.
The goroutine g exited its system call.
Arrange for it to run on a cpu again.
This is called only from the go syscall library, not
from the low-level system calls used by the runtime.
Write barriers are not allowed because our P may have been stolen.
This is exported via linkname to assembly in the syscall package.
exitsyscall slow path on g0.
Failed to acquire P, enqueue gp as runnable.
Called via mcall, so gp is the calling g from this M.
func exitsyscallfast(oldp *p) bool
exitsyscallfast_reacquired is the exitsyscall path on which this G
has successfully reacquired the P it was running on before the
syscall.
exitThread terminates the current thread, writing *wait = freeMStack when
the stack is safe to reclaim.
expandCgoFrames expands frame information for pc, known to be
a non-Go function, using the cgoSymbolizer hook. expandCgoFrames
returns nil if pc could not be expanded.
extendRandom extends the random numbers in r[:n] to the whole slice r.
Treats n<0 as n==0.
Type Parameters:
F: floaty
fastexprand returns a random number from an exponential distribution with
the specified mean.
fastlog2 implements a fast approximation to the base 2 log of a
float64. This is used to compute a geometric distribution for heap
sampling, without introducing dependencies into package math. This
uses a very rough approximation using the float64 exponent and the
first 25 bits of the mantissa. The top 5 bits of the mantissa are
used to load limits from a table of constants and the rest are used
to scale linearly between them.
fatal triggers a fatal error that dumps a stack trace and exits.
fatal is equivalent to throw, but is used when user code is expected to be
at fault for the failure, such as racing map writes.
fatal does not include runtime frames, system goroutines, or frame metadata
(fp, sp, pc) in the stack trace unless GOTRACEBACK=system or higher.
fatalpanic implements an unrecoverable panic. It is like fatalthrow, except
that if msgs != nil, fatalpanic also prints panic messages and decrements
runningPanicDefers once main is blocked from exiting.
fatalthrow implements an unrecoverable runtime throw. It freezes the
system, prints stack traces starting from its caller, and terminates the
process.
fillAligned returns x but with all zeroes in m-aligned
groups of m bits set to 1 if any bit in the group is non-zero.
For example, fillAligned(0x0100a3, 8) == 0xff00ff.
Note that if m == 1, this is a no-op.
m must be a power of 2 <= maxPagesPerPhysPage.
findBitRange64 returns the bit index of the first set of
n consecutive 1 bits. If no consecutive set of 1 bits of
size n may be found in c, then it returns an integer >= 64.
n must be > 0.
findfunc looks up function metadata for a PC.
It is nosplit because it's part of the isgoexception
implementation.
findmoduledatap looks up the moduledata for a PC.
It is nosplit because it's part of the isgoexception
implementation.
findObject returns the base address for the heap object containing
the address p, the object's span, and the index of the object in s.
If p does not point into a heap object, it returns base == 0.
If p is an invalid heap pointer and debug.invalidptr != 0,
findObject panics.
refBase and refOff optionally give the base address of the object
in which the pointer p was found and the byte offset at which it
was found. These are used for error reporting.
It is nosplit so it is safe for p to be a pointer to the current goroutine's stack.
Since p is a uintptr, it would not be adjusted if the stack were to move.
Finds a runnable goroutine to execute.
Tries to steal from other P's, get g from local or global queue, poll network.
tryWakeP indicates that the returned goroutine is not normal (GC worker, trace
reader) so the caller should try to wake a P.
finishsweep_m ensures that all spans are swept.
The world must be stopped. This ensures there are no sweeps in
progress.
float64bits returns the IEEE 754 binary representation of f.
float64frombits returns the floating point number corresponding to
the IEEE 754 binary representation b.
flushallmcaches flushes the mcaches of all Ps.
The world must be stopped.
flushmcache flushes the mcache of allp[i].
The world must be stopped.
Type Parameters:
F: floaty
Type Parameters:
F: floaty
fmtNSAsMS nicely formats ns nanoseconds as milliseconds.
Type Parameters:
F: floaty
forEachG calls fn on every G from allgs.
forEachG takes a lock to exclude concurrent addition of new Gs.
forEachGRace calls fn on every G from allgs.
forEachGRace avoids locking, but does not exclude addition of new Gs during
execution, which may be missed.
forEachP calls fn(p) for every P p when p reaches a GC safe point.
If a P is currently executing code, this will bring the P to a GC
safe point and execute fn on that P. If the P is not executing code
(it is idle or in a syscall), this will call fn(p) directly while
preventing the P from exiting its state. This does not ensure that
fn will run on every CPU executing Go code, but it acts as a global
memory barrier. GC uses this as a "ragged barrier."
The caller must hold worldsema.
fpTracebackPCs populates pcBuf with the return addresses for each frame and
returns the number of PCs written to pcBuf. The returned PCs correspond to
"physical frames" rather than "logical frames"; that is if A is inlined into
B, this will return a PC for only B.
fpunwindExpand checks if pcBuf contains logical frames (which include inlined
frames) or physical frames (produced by frame pointer unwinding) using a
sentinel value in pcBuf[0]. Logical frames are simply returned without the
sentinel. Physical frames are turned into logical frames via inline unwinding
and by applying the skip value that's stored in pcBuf[0].
Free the given defer.
The defer cannot be used after this call.
This is nosplit because the incoming defer is in a perilous state.
It's not on any defer list, so stack copying won't adjust stack
pointers in it (namely, d.link). Hence, if we were to copy the
stack, d could then contain a stale pointer.
Separate function so that it can split stack.
Windows otherwise runs out of stack space.
freemcache releases resources associated with this
mcache and puts the object onto a free list.
In some cases there is no way to simply release
resources, such as statistics, so donate them to
a different mcache (the recipient).
freeSomeWbufs frees some workbufs back to the heap and returns
true if it should be called again to free more.
freeSpecial performs any cleanup on special s and deallocates it.
s must already be unlinked from the specials list.
freeStackSpans frees unused stack spans at the end of GC.
freeUserArenaChunk releases the user arena represented by s back to the runtime.
x must be a live pointer within s.
The runtime will set the user arena to fault once it's safe (the GC is no longer running)
and then once the user arena is no longer referenced by the application, will allow it to
be reused.
Similar to stopTheWorld but best-effort and can be called several times.
There is no reverse operation; it is used during crashing.
This function must not lock any mutexes.
full reports whether a send on c would block (that is, the channel is full).
It uses a single word-sized read of mutable state, so although
the answer is instantaneously true, the correct answer may have changed
by the time the calling function receives the return value.
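A sketch of that rule using a simplified stand-in for the channel header; qcount and dataqsiz mirror the buffered-channel state the description implies, and recvWaiting stands in for the queue of parked receivers:

    type hchanSketch struct {
        qcount      uint // elements currently buffered
        dataqsiz    uint // buffer capacity (0 for an unbuffered channel)
        recvWaiting bool // whether a receiver is parked waiting on the channel
    }

    // fullSketch reports whether a send would block: an unbuffered channel is
    // "full" unless a receiver is already waiting, and a buffered channel is
    // full when its buffer is at capacity.
    func fullSketch(c *hchanSketch) bool {
        if c.dataqsiz == 0 {
            return !c.recvWaiting
        }
        return c.qcount == c.dataqsiz
    }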
funcdata returns a pointer to the ith funcdata for f.
funcdata should be kept in sync with cmd/link:writeFuncs.
funcMaxSPDelta returns the maximum spdelta at any point in f.
funcNameForPrint returns the function name for printing to the user.
funcNamePiecesForPrint returns the function name for printing to the user.
It returns three pieces so it doesn't need an allocation for string
concatenation.
func funcspdelta(f funcInfo, targetpc uintptr, cache *pcvalueCache) int32
Atomically,
if(*addr == val) sleep
Might be woken up spuriously; that's allowed.
Don't sleep longer than ns; ns < 0 means forever.
If any procs are sleeping on addr, wake up at most cnt.
gcAssistAlloc performs GC work to make gp's assist debt positive.
gp must be the calling user goroutine.
This must be called with preemption enabled.
gcAssistAlloc1 is the part of gcAssistAlloc that runs on the system
stack. This is a separate function to make it easier to see that
we're not capturing anything from the user stack, since the user
stack may move while we're in this function.
gcAssistAlloc1 indicates whether this assist completed the mark
phase by setting gp.param to non-nil. This can't be communicated on
the stack since it may move.
gcBgMarkPrepare sets up state for background marking.
Mutator assists must not yet be enabled.
gcBgMarkStartWorkers prepares background mark worker goroutines. These
goroutines will not run until the mark phase, but they must be started while
the work is not stopped and from a regular G stack. The caller must hold
worldsema.
gcControllerCommit is gcController.commit, but passes arguments from live
(non-test) data. It also updates any consumers of the GC pacing, such as
sweep pacing and the background scavenger.
Calls gcController.commit.
The heap lock must be held, so this must be executed on the system stack.
gcDrain scans roots and objects in work buffers, blackening grey
objects until it is unable to get more work. It may return before
GC is done; it's the caller's responsibility to balance work from
other Ps.
If flags&gcDrainUntilPreempt != 0, gcDrain returns when g.preempt
is set.
If flags&gcDrainIdle != 0, gcDrain returns when there is other work
to do.
If flags&gcDrainFractional != 0, gcDrain self-preempts when
pollFractionalWorkerExit() returns true. This implies
gcDrainNoBlock.
If flags&gcDrainFlushBgCredit != 0, gcDrain flushes scan work
credit to gcController.bgScanCredit every gcCreditSlack units of
scan work.
gcDrain will always return if there is a pending STW.
gcDrainN blackens grey objects until it has performed roughly
scanWork units of scan work or the G is preempted. This is
best-effort, so it may perform less work if it fails to get a work
buffer. Otherwise, it will perform at least n units of work, but
may perform more because scanning is always done in whole object
increments. It returns the amount of scan work performed.
The caller goroutine must be in a preemptible state (e.g.,
_Gwaiting) to prevent deadlocks during stack scanning. As a
consequence, this must be called on the system stack.
gcDumpObject dumps the contents of obj for debugging and marks the
field at byte offset off in obj.
gcenable is called after the bulk of the runtime initialization,
just before we're about to start letting user code run.
It kicks off the background sweeper goroutine, the background
scavenger goroutine, and enables GC.
gcFlushBgCredit flushes scanWork units of background scan work
credit. This first satisfies blocked assists on the
work.assistQueue and then flushes any remaining credit to
gcController.bgScanCredit.
Write barriers are disallowed because this is used by gcDrain after
it has ensured that all work is drained and this must preserve that
condition.
gcMark runs the mark (or, for concurrent GC, mark termination).
All gcWork caches must be empty.
STW is in effect at this point.
gcMarkDone transitions the GC from mark to mark termination if all
reachable objects have been marked (that is, there are no grey
objects and there can be no more in the future). Otherwise, it flushes
all local work to the global queues where it can be discovered by
other workers.
This should be called when all local mark work has been drained and
there are no remaining workers. Specifically, when
work.nwait == work.nproc && !gcMarkWorkAvailable(p)
The calling context must be preemptible.
Flushing local work is important because idle Ps may have local
work queued. This is the only way to make that work visible and
drive GC to completion.
It is explicitly okay to have write barriers in this function. If
it does transition to mark termination, then all reachable objects
have been marked, so the write barrier cannot shade any more
objects.
gcmarknewobject marks a newly allocated object black. obj must
not contain any non-nil pointers.
This is nosplit so it can manipulate a gcWork without preemption.
gcMarkRootCheck checks that all roots have been scanned. It is
purely for debugging.
gcMarkRootPrepare queues root scanning jobs (stacks, globals, and
some miscellany) and initializes scanning-related state.
The world must be stopped.
World must be stopped and mark assists and background workers must be
disabled.
gcMarkTinyAllocs greys all active tiny alloc blocks.
The world must be stopped.
gcMarkWorkAvailable reports whether executing a mark worker
on p is potentially useful. p may be nil, in which case it only
checks the global sources of work.
gcPaceScavenger updates the scavenger's pacing, particularly
its rate and RSS goal. For this, it requires the current heapGoal,
and the heapGoal for the previous GC cycle.
The RSS goal is based on the current heap goal with a small overhead
to accommodate non-determinism in the allocator.
The pacing is based on scavengePageRate, which applies to both regular and
huge pages. See that constant for more information.
Must be called whenever GC pacing is updated.
mheap_.lock must be held or the world must be stopped.
gcPaceSweeper updates the sweeper's pacing parameters.
Must be called whenever the GC's pacing is updated.
The world must be stopped, or mheap_.lock must be held.
gcParkAssist puts the current goroutine on the assist queue and parks.
gcParkAssist reports whether the assist is now satisfied. If it
returns false, the caller must retry the assist.
gcResetMarkState resets global state prior to marking (concurrent
or STW) and resets the stack scan state of all Gs.
This is safe to do without the world stopped because any Gs created
during or after this will start out in the reset state.
gcResetMarkState must be called on the system stack because it acquires
the heap lock. See mheap for details.
gcStart starts the GC. It transitions from _GCoff to _GCmark (if
debug.gcstoptheworld == 0) or performs all of GC (if
debug.gcstoptheworld != 0).
This may return without performing this transition in some cases,
such as when called on a system stack or with locks held.
Stops the current m for stopTheWorld.
Returns when the world is restarted.
gcSweep must be called on the system stack because it acquires the heap
lock. See mheap for details.
The world must be stopped.
gcTestIsReachable performs a GC and returns a bit set where bit i
is set if ptrs[i] is reachable.
gcTestMoveStackOnNextCall causes the stack to be moved on a call
immediately following the call to this. It may not work correctly
if any other work appears after this call (such as returning).
Typically the following call should be marked go:noinline so it
performs a stack check.
In rare cases this may not cause the stack to move, specifically if
there's a preemption between this call and the next.
gcTestPointerClass returns the category of what p points to, one of:
"heap", "stack", "data", "bss", "other". This is useful for checking
that a test is doing what it's intended to do.
This is nosplit simply to avoid extra pointer shuffling that may
complicate a test.
gcWaitOnMark blocks until GC finishes the Nth mark phase. If GC has
already completed this mark phase, it returns immediately.
gcWakeAllAssists wakes all currently blocked assists. This is used
at the end of a GC cycle. gcBlackenEnabled must be false to prevent
new assists from going to sleep after this point.
Called from compiled code; declared for vet; do NOT call from Go.
Called from compiled code; declared for vet; do NOT call from Go.
getargp returns the location where the caller
writes outgoing function call arguments.
getclosureptr returns the pointer to the current closure.
getclosureptr can only be used in an assignment statement
at the entry of a function. Moreover, go:nosplit directive
must be specified at the declaration of caller function,
so that the function prolog does not clobber the closure register.
for example:
//go:nosplit
func f(arg1, arg2, arg3 int) {
dx := getclosureptr()
}
The compiler rewrites calls to this function into instructions that fetch the
pointer from a well-known register (DX on x86 architecture, etc.) directly.
getempty pops an empty work buffer off the work.empty list,
allocating new buffers if none are available.
Return an M from the extra M list. Returns last == true if the list becomes
empty because of this call.
Spins waiting for an extra M, so caller must ensure that the list always
contains or will soon contain at least one M.
getfp returns the frame pointer register of its caller or 0 if not implemented.
TODO: Make this a compiler intrinsic
getg returns the pointer to the current g.
The compiler rewrites calls to this function into instructions
that fetch the g directly (from TLS or from the dedicated register).
Returns GC type info for the pointer stored in ep for testing.
If ep points to the stack, only static live information will be returned
(i.e. not for objects which are only dynamically live stack objects).
getGodebugEarly extracts the environment variable GODEBUG from the environment on
Unix-like operating systems and returns it. This function exists to extract GODEBUG
early before much of the runtime is initialized.
func getLockRank(l *mutex) lockRank
A helper function for EnsureDropM.
getMCache is a convenience function which tries to obtain an mcache.
Returns nil if we're not bootstrapping or we don't have a P. The caller's
P must not change, so we must be in a non-preemptible state.
func getRandomData(r []byte)
Get from gfree list.
If local list is empty, grab a batch from global list.
Purge all cached G's from gfree list to the global list.
Put on gfree list.
If local list is too long, transfer a batch to the global list.
Try to get a batch of G's from the global runnable queue.
sched.lock must be held.
Put gp on the global runnable queue.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
Put a batch of runnable goroutines on the global runnable queue.
This clears *batch.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
Put gp at the head of the global runnable queue.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
used by cmd/cgo
func godebug_registerMetric(name string, read func() uint64)
func godebug_setNewIncNonDefault(newIncNonDefault func(string) func())
func godebug_setUpdate(update func(string, string))
func godebugNotify(envChanged bool)
goexit is the return stub at the top of every goroutine call stack.
Each goroutine stack is constructed as if goexit called the
goroutine's entry point function, so that when the entry point
function returns, it will return to goexit, which will call goexit1
to perform the actual exit.
This function must never be called directly. Call goexit1 instead.
gentraceback assumes that goexit terminates the stack. A direct
call on the stack will cause gentraceback to stop walking the stack
prematurely and if there is leftover state it may panic.
goexit continuation on g0.
Finishes execution of the current goroutine.
The implementation of the predeclared function panic.
failures in the comparisons for s[x], 0 <= x < y (y == len(s))
func goPanicIndexU(x uint, y int)
func goPanicSlice3Acap(x int, y int)
func goPanicSlice3AcapU(x uint, y int)
failures in the comparisons for s[::x], 0 <= x <= y (y == len(s) or cap(s))
func goPanicSlice3AlenU(x uint, y int)
failures in the comparisons for s[:x:y], 0 <= x <= y
func goPanicSlice3BU(x uint, y int)
failures in the comparisons for s[x:y:], 0 <= x <= y
func goPanicSlice3CU(x uint, y int)
func goPanicSliceAcap(x int, y int)
func goPanicSliceAcapU(x uint, y int)
failures in the comparisons for s[:x], 0 <= x <= y (y == len(s) or cap(s))
func goPanicSliceAlenU(x uint, y int)
failures in the comparisons for s[x:y], 0 <= x <= y
func goPanicSliceBU(x uint, y int)
failures in the conversion ([x]T)(s) or (*[x]T)(s), 0 <= x <= y, y == len(s)
Puts the current goroutine into a waiting state and calls unlockf on the
system stack.
If unlockf returns false, the goroutine is resumed.
unlockf must not access this G's stack, as it may be moved between
the call to gopark and the call to unlockf.
Note that because unlockf is called after putting the G into a waiting
state, the G may have already been readied by the time unlockf is called
unless there is external synchronization preventing the G from being
readied. If unlockf returns false, it must guarantee that the G cannot be
externally readied.
Reason explains why the goroutine has been parked. It is displayed in stack
traces and heap dumps. Reasons should be unique and descriptive. Do not
re-use reasons, add new ones.
Puts the current goroutine into a waiting state and unlocks the lock.
The goroutine can be made runnable again by calling goready(gp).
func gopreempt_m(gp *g)
The implementation of the predeclared function recover.
Cannot split the stack because it needs to reliably
find the stack segment of its caller.
TODO(rsc): Once we commit to CopyStackAlways,
this doesn't need to be nosplit.
func goroutineheader(gp *g)
labels may be nil. If labels is non-nil, it must have the same length as p.
func goroutineProfileWithLabelsConcurrent(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
func goroutineProfileWithLabelsSync(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
Ready the goroutine arg.
Gosched continuation on g0.
goschedguarded yields the processor like gosched, but also checks
for forbidden states and opts out of the yield in those cases.
goschedguarded is a forbidden-states-avoided version of gosched_m.
goschedIfBusy yields the processor like gosched, but only does so if
there are no idle Ps or if we're on the only P and there's nothing in
the run queue. In both cases, there is freely available idle time.
func goschedImpl(gp *g)
adjust Gobuf as if it executed a call to fn with context ctxt
and then stopped before the first instruction in fn.
adjust Gobuf as if it executed a call to fn
and then stopped before the first instruction in fn.
This is exported via linkname to assembly in syscall (for Plan9).
func gostringnocopy(str *byte) string
gotraceback returns the current traceback settings.
If level is 0, suppress all tracebacks.
If level is 1, show tracebacks, but exclude runtime frames.
If level is 2, show tracebacks including runtime frames.
If all is set, print all goroutine stacks. Otherwise, print just the current goroutine.
If crash is set, crash (core dump, etc) after tracebacking.
goyield is like Gosched, but it:
- emits a GoPreempt trace event instead of a GoSched trace event
- puts the current G on the runq of the current P instead of the globrunq
obj is the start of an object with mark mbits.
If it isn't already marked, mark it and enqueue into gcw.
base and off are for debugging only and could be removed.
See also wbBufFlush1, which partially duplicates this logic.
growslice allocates new backing store for a slice.
arguments:
oldPtr = pointer to the slice's backing array
newLen = new length (= oldLen + num)
oldCap = original slice's capacity.
num = number of elements being added
et = element type
return values:
newPtr = pointer to the new backing store
newLen = same value as the argument
newCap = capacity of the new backing store
Requires that uint(newLen) > uint(oldCap).
Assumes the original slice length is newLen - num.
A new backing store is allocated with space for at least newLen elements.
Existing entries [0, oldLen) are copied over to the new backing store.
Added entries [oldLen, newLen) are not initialized by growslice
(although for pointer-containing element types, they are zeroed). They
must be initialized by the caller.
Trailing entries [newLen, newCap) are zeroed.
growslice's odd calling convention makes the generated code that calls
this function simpler. In particular, it accepts and returns the
new length so that the old length is not live (does not need to be
spilled/restored) and the new length is returned (also does not need
to be spilled/restored).
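From a caller's point of view, the contract amounts to the following ordinary-Go sketch; growAppend and its doubling policy are illustrative only, since the real growslice is invoked by compiler-generated append code with the calling convention described above:

    // growAppend grows old so it can hold len(added) more elements, copies the
    // existing elements over, and then fills the newly added slots, mirroring
    // the division of labor between growslice and its caller.
    func growAppend[T any](old []T, added ...T) []T {
        newLen := len(old) + len(added)
        if newLen <= cap(old) {
            // No growth needed; growslice is only called when
            // uint(newLen) > uint(oldCap).
            s := old[:newLen]
            copy(s[len(old):], added)
            return s
        }
        newCap := newLen
        if newCap < 2*cap(old) {
            newCap = 2 * cap(old) // illustrative growth policy
        }
        s := make([]T, newLen, newCap)
        copy(s, old)              // existing entries [0, oldLen) are copied over
        copy(s[len(old):], added) // the "caller" initializes [oldLen, newLen)
        return s
    }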
func growWork_fast32(t *maptype, h *hmap, bucket uintptr)
func growWork_fast64(t *maptype, h *hmap, bucket uintptr)
func growWork_faststr(t *maptype, h *hmap, bucket uintptr)
write to goroutine-local buffer if diverting output,
or else standard error.
Hands off P from syscall or locked M.
Always runs without a P, so write barriers are not allowed.
heapBitsForAddr returns the heapBits for the address addr.
The caller must ensure [addr,addr+size) is in an allocated span.
In particular, be careful not to point past the end of an object.
nosplit because it is used during write barriers and must not be preempted.
heapBitsSetType records that the new allocation [x, x+size)
holds in [x, x+dataSize) one or more values of type typ.
(The number of values is given by dataSize / typ.Size.)
If dataSize < size, the fragment [x+dataSize, x+size) is
recorded as non-pointer data.
It is known that the type has pointers somewhere;
malloc does not call heapBitsSetType when there are no pointers,
because all free objects are marked as noscan during
heapBitsSweepSpan.
There can only be one allocation from a given span active at a time,
and the bitmap for a span always falls on word boundaries,
so there are no write-write races for access to the heap bitmap.
Hence, heapBitsSetType can access the bitmap without atomics.
There can be read-write races between heapBitsSetType and things
that read the heap bitmap like scanobject. However, since
heapBitsSetType is only used for objects that have not yet been
made reachable, readers will ignore bits being modified by this
function. This does mean this function cannot transiently modify
bits that belong to neighboring objects. Also, on weakly-ordered
machines, callers must execute a store/store (publication) barrier
between calling this function and making the object reachable.
heapObjectsCanMove always returns false in the current garbage collector.
It exists for go4.org/unsafe/assume-no-moving-gc, which is an
unfortunate idea that had an even more unfortunate implementation.
Every time a new Go release happened, the package stopped building,
and the authors had to add a new file with a new //go:build line, and
then the entire ecosystem of packages with that as a dependency had to
explicitly update to the new version. Many packages depend on
assume-no-moving-gc transitively, through paths like
inet.af/netaddr -> go4.org/intern -> assume-no-moving-gc.
This was causing a significant amount of friction around each new
release, so we added this bool for the package to //go:linkname
instead. The bool is still unfortunate, but it's not as bad as
breaking the ecosystem on every new release.
If the Go garbage collector ever does move heap objects, we can set
this to true to break all the programs using assume-no-moving-gc.
heapRetained returns an estimate of the current heap RSS.
hexdumpWords prints a word-oriented hex dump of [p, end).
If mark != nil, it will be called with each printed word's address
and should return a character mark to appear just before that
word's value. It can return 0 to indicate no mark.
inf2one returns a signed 1 if f is an infinity and a signed 0 otherwise.
The sign of the result is the sign of f.
inheap reports whether b is a pointer into a (potentially dead) heap object.
It returns false for pointers into mSpanManual spans.
Non-preemptible because it is used by write barriers.
inHeapOrStack is a variant of inheap that returns true for pointers
into any allocated heap span.
start forcegc helper goroutine
initMetrics initializes the metrics map if it hasn't been initialized yet.
metricsSema must be held.
func initPageTrace(env string)
Initialize signals.
Called by libpreinit so runtime may not be initialized.
injectglist adds each runnable G on the list to some run queue,
and clears glist. If there is no current P, they are added to the
global queue, and up to npidle M's are started to run them.
Otherwise, for each idle P, this adds a G to the global queue
and starts an M. Any remaining G's are added to the current P's
local run queue.
This may temporarily acquire sched.lock.
Can run concurrently with GC.
inPersistentAlloc reports whether p points to memory allocated by
persistentalloc. This must be nosplit because it is called by the
cgo checker code, which is called by the write barrier code.
inRange reports whether v0 or v1 are in the range [r0, r1].
func interequal(p, q unsafe.Pointer) bool
internal_syscall_gostring is a version of gostring for internal/syscall/unix.
inUserArenaChunk returns true if p points to a user arena chunk.
vdsoMarker reports whether PC is on the VDSO page.
isAbortPC reports whether pc is the program counter at which
runtime.abort raises a signal.
It is nosplit because it's part of the isgoexception
implementation.
isAsyncSafePoint reports whether gp at instruction PC is an
asynchronous safe point. This indicates that:
1. It's safe to suspend gp and conservatively scan its stack and
registers. There are no potentially hidden pointer values and it's
not in the middle of an atomic sequence like a write barrier.
2. gp has enough stack space to inject the asyncPreempt call.
3. It's generally safe to interact with the runtime, even if we're
in a signal handler stopped here. For example, there are no runtime
locks held, so acquiring a runtime lock won't self-deadlock.
In some cases the PC is safe for asynchronous preemption but it
also needs to adjust the resumption PC. The new PC is returned in
the second result.
isDirectIface reports whether t is stored directly in an interface value.
isEmpty reports whether the given tophash array entry represents an empty bucket entry.
isExportedRuntime reports whether name is an exported runtime function.
It is only for runtime functions, so ASCII A-Z is fine.
TODO: this handles exported functions but not exported methods.
isFinite reports whether f is neither NaN nor an infinity.
isInf reports whether f is an infinity.
isNaN reports whether f is an IEEE 754 “not-a-number” value.
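Minimal sketches of how these classifications can be computed with IEEE 754 arithmetic alone, with no dependency on package math:

    // NaN is the only value that compares unequal to itself.
    func isNaNSketch(f float64) bool { return f != f }

    // f - f is 0 for finite values and NaN for NaN or an infinity.
    func isFiniteSketch(f float64) bool { return !isNaNSketch(f - f) }

    // An infinity is the only value that is neither NaN nor finite.
    func isInfSketch(f float64) bool { return !isNaNSketch(f) && !isFiniteSketch(f) }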
isPinned checks if a Go pointer is pinned.
nosplit, because it's called from nosplit code in cgocheck.
isShrinkStackSafe returns whether it's safe to attempt to shrink
gp's stack. Shrinking the stack is only safe when we have precise
pointer maps for all frames on the stack.
isSweepDone reports whether all spans are swept.
Note that this condition may transition from false to true at any
time as the sweeper runs. It may transition from true to false if a
GC runs; to prevent that the caller must be non-preemptible or must
somehow block GC progress.
isSystemGoroutine reports whether the goroutine g must be omitted
in stack dumps and deadlock detector. This is any goroutine that
starts at a runtime.* entry point, except for runtime.main,
runtime.handleAsyncEvent (wasm only) and sometimes runtime.runfinq.
If fixed is true, any goroutine that can vary between user and
system (that is, the finalizer goroutine) is considered a user
goroutine.
func itab_callback(tab *itab)
itabAdd adds the given itab to the itab hash table.
itabLock must be held.
func itabHashFunc(inter *interfacetype, typ *_type) uintptr
func iterate_itabs(fn func(*itab))
itoa converts val to a decimal representation. The result is
written somewhere within buf and the location of the result is returned.
buf must be at least 20 bytes.
itoaDiv formats val/(10**dec) into buf.
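A sketch of the itoa contract: digits are written backwards from the end of buf and a sub-slice of buf holding the result is returned, so no allocation is needed (20 bytes is enough for any uint64):

    func itoaSketch(buf []byte, val uint64) []byte {
        i := len(buf) - 1
        for val >= 10 {
            buf[i] = byte(val%10 + '0')
            i--
            val /= 10
        }
        buf[i] = byte(val + '0')
        return buf[i:]
    }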
We use the uintptr mutex.key and note.key as a uint32.
keys for implementing maps.keys
less checks if a < b, considering a & b running counts that may overflow the
32-bit range, and that their "unwrapped" difference is always less than 2^31.
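A one-line sketch of that wrap-around comparison: interpreting the unsigned difference as a signed value gives the right ordering as long as the true, unwrapped difference stays below 2^31:

    // lessSketch reports a < b for free-running uint32 counters that may wrap.
    func lessSketch(a, b uint32) bool {
        return int32(b-a) > 0
    }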
levelIndexToOffAddr converts an index into summary[level] into
the corresponding address in the offset address space.
lfnodeValidate panics if node is not a valid address for use with
lfstack.push. This only needs to be called when node is allocated.
func lfstackPack(node *lfnode, cnt uintptr) uint64
func lfstackUnpack(val uint64) *lfnode
Called to do synchronous initialization of Go code built with
-buildmode=c-archive or -buildmode=c-shared.
None of the Go runtime is initialized.
lockextra locks the extra list and returns the list head.
The caller must unlock the list by storing a new list head
to extram. If nilokay is true, then lockextra will
return a nil list head if that's what it finds. If nilokay is false,
lockextra will keep waiting until the list head is no longer nil.
lockRankMayQueueFinalizer records the lock ranking effects of a
function that may call queuefinalizer.
lockRankMayTraceFlush records the lock ranking effects of a
potential call to traceFlush.
func lockWithRank(l *mutex, rank lockRank)
func lockWithRankMayAcquire(l *mutex, rank lockRank)
func lowerASCII(c byte) byte
return value is only set on linux to be used in osinit().
The main goroutine.
makeAddrRange creates a new address range from two virtual addresses.
Throws if the base and limit are not in the same memory segment.
makeBucketArray initializes a backing array for map buckets.
1<<b is the minimum number of buckets to allocate.
dirtyalloc should either be nil or a bucket array previously
allocated by makeBucketArray with the same t and b parameters.
If dirtyalloc is nil, a new backing array will be allocated;
otherwise dirtyalloc will be cleared and reused as the backing array.
func makechan64(t *chantype, size int64) *hchan
makeHeadTailIndex creates a headTailIndex value from a separate
head and tail.
func makeheapobjbv(p uintptr, size uintptr) bitvector
makeLimiterEventStamp creates a new stamp from the event type and the current timestamp.
makemap implements Go map creation for make(map[k]v, hint).
If the compiler has determined that the map or the first bucket
can be created on the stack, h and/or bucket may be non-nil.
If h != nil, the map can be created directly in h.
If h.buckets != nil, bucket pointed to can be used as the first bucket.
makemap_small implements Go map creation for make(map[k]v) and
make(map[k]v, hint) when hint is known to be at most bucketCnt
at compile time and the map needs to be allocated on the heap.
makeslicecopy allocates a slice of "tolen" elements of type "et",
then copies "fromlen" elements of type "et" into that new allocation from "from".
func makeSpanClass(sizeclass uint8, noscan bool) spanClass
makeStatDepSet creates a new statDepSet from a list of statDeps.
Allocate a new g, with a stack big enough for stacksize bytes.
Allocate an object of size bytes.
Small objects are allocated from the per-P cache's free lists.
Large objects (> 32 kB) are allocated straight from the heap.
mapaccess1 returns a pointer to h[key]. Never returns nil, instead
it will return a reference to the zero object for the elem type if
the key is not in the map.
NOTE: The returned pointer may keep the whole map live, so don't
hold onto it for very long.
returns both key and elem. Used by map iterator.
Like mapaccess, but allocates a slot for the key if it is not present in the map.
mapclear deletes all keys from a map.
mapclone for implementing maps.Clone
func mapdelete_fast32(t *maptype, h *hmap, key uint32)
func mapdelete_fast64(t *maptype, h *hmap, key uint64)
func mapdelete_faststr(t *maptype, h *hmap, ky string)
mapinitnoop is a no-op function known to the Go linker; if a given global
map (of the right size) is determined to be dead, the linker will
rewrite the relocation (from the package init func) from the outlined
map init function to this symbol. Defined in assembly so as to avoid
complications with instrumentation (coverage, etc).
mapiterinit initializes the hiter struct used for ranging over maps.
The hiter struct pointed to by 'it' is allocated on the stack
by the compiler's order pass or on the heap by reflect_mapiterinit.
Both need to have zeroed hiter since the struct contains pointers.
func mapiternext(it *hiter)
markBitsForSpan returns the markBits for the span base address base.
markroot scans the i'th root.
Preemption must be disabled (because this uses a gcWork).
Returns the amount of GC work credit produced by the operation.
If flushBgCredit is true, then that credit is also flushed
to the background credit pool.
nowritebarrier is only advisory here.
markrootBlock scans the shard'th shard of the block of memory [b0,
b0+n0), with the given pointer mask.
Returns the amount of work done.
markrootFreeGStacks frees stacks of dead Gs.
This does not free stacks of dead Gs cached on Ps, but having a few
cached stacks around isn't a problem.
markrootSpans marks roots for one shard of markArenas.
materializeGCProg allocates space for the (1-bit) pointer bitmask
for an object of size ptrdata. Then it fills that space with the
pointer bitmask specified by the program prog.
The bitmask starts at s.startAddr.
The result must be deallocated with dematerializeGCProg.
maxSearchAddr returns the maximum searchAddr value, which indicates
that the heap has no free space.
This function exists just to make it clear that this is the maximum address
for the page allocator's search space. See maxOffAddr for details.
It's a function (rather than a variable) because it needs to be
usable before package runtime's dynamic initialization is complete.
See #51913 for details.
mayMoreStackMove is a maymorestack hook that forces stack movement
at every possible point.
See mayMoreStackPreempt.
mayMoreStackPreempt is a maymorestack hook that forces a preemption
at every possible cooperative preemption point.
This is valuable to apply to the runtime, which can be sensitive to
preemption points. To apply this to all preemption points in the
runtime and runtime-like code, use the following in bash or zsh:
X=(-{gc,asm}flags={runtime/...,reflect,sync}=-d=maymorestack=runtime.mayMoreStackPreempt) GOFLAGS=${X[@]}
This must be deeply nosplit because it is called from a function
prologue before the stack is set up and because the compiler will
call it from any splittable prologue (leading to infinite
recursion).
Ideally it should also use very little stack because the linker
doesn't currently account for this in nosplit stack depth checking.
Ensure mayMoreStackPreempt can be called for all ABIs.
mcall switches from the g to the g0 stack and invokes fn(g),
where g is the goroutine that made the call.
mcall saves g's current PC/SP in g->sched so that it can be restored later.
It is up to fn to arrange for that later execution, typically by recording
g in a data structure, causing something to call ready(g) later.
mcall returns to the original goroutine g later, when g has been rescheduled.
fn must not return at all; typically it ends by calling schedule, to let the m
run other goroutines.
mcall can only be called from g stacks (not g0, not gsignal).
This must NOT be go:noescape: if fn is a stack-allocated closure,
fn puts g on a run queue, and g executes before fn returns, the
closure will be invalidated while it is still executing.
Pre-allocated ID may be passed as 'id', or omitted by passing -1.
Called from exitm, but not from drop, to undo the effect of thread-owned
resources in minit, semacreate, or elsewhere. Do not take locks after calling this.
memclrHasPointers clears n bytes of typed memory starting at ptr.
The caller must ensure that the type of the object at ptr has
pointers, usually by checking typ.PtrBytes. However, ptr
does not have to point to the start of the allocation.
memclrNoHeapPointers clears n bytes starting at ptr.
Usually you should use typedmemclr. memclrNoHeapPointers should be
used only when the caller knows that *ptr contains no heap pointers
because either:
*ptr is initialized memory and its type is pointer-free, or
*ptr is uninitialized memory (e.g., memory that's being reused
for a new allocation) and hence contains only "junk".
memclrNoHeapPointers ensures that if ptr is pointer-aligned, and n
is a multiple of the pointer size, then any pointer-aligned,
pointer-sized portion is cleared atomically. Despite the function
name, this is necessary because this function is the underlying
implementation of typedmemclr and memclrHasPointers. See the doc of
memmove for more details.
The (CPU-specific) implementations of this function are in memclr_*.s.
memclrNoHeapPointersChunked repeatedly calls memclrNoHeapPointers
on chunks of the buffer to be zeroed, with opportunities for preemption
along the way. memclrNoHeapPointers contains no safepoints and also
cannot be preemptively scheduled, so this provides a still-efficient
block copy that can also be preempted on a reasonable granularity.
Use this with care; if the data being cleared is tagged to contain
pointers, this allows the GC to run before it is all cleared.
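A sketch of the chunking idea in ordinary Go; the chunk size and the plain clearing loop are stand-ins for the runtime's constant and for memclrNoHeapPointers:

    const clearChunkBytes = 256 << 10 // illustrative chunk size

    func memclrChunkedSketch(buf []byte) {
        for len(buf) > 0 {
            n := len(buf)
            if n > clearChunkBytes {
                n = clearChunkBytes
            }
            for i := range buf[:n] { // stand-in for memclrNoHeapPointers
                buf[i] = 0
            }
            buf = buf[n:]
            // Between chunks the goroutine can be preempted, so a very large
            // clear does not hold off preemption for its whole duration.
        }
    }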
in internal/bytealg/equal_*.s
func memequal128(p, q unsafe.Pointer) bool
func memequal16(p, q unsafe.Pointer) bool
func memequal32(p, q unsafe.Pointer) bool
func memequal64(p, q unsafe.Pointer) bool
func memequal_varlen(a, b unsafe.Pointer) bool
in asm_*.s
memmove copies n bytes from "from" to "to".
memmove ensures that any pointer in "from" is written to "to" with
an indivisible write, so that racy reads cannot observe a
half-written pointer. This is necessary to prevent the garbage
collector from observing invalid pointers, and differs from memmove
in unmanaged languages. However, memmove is only required to do
this if "from" and "to" may contain pointers, which can only be the
case if "from", "to", and "n" are all be word-aligned.
Implementations are in memmove_*.s.
mergeSummaries merges consecutive summaries which may each represent at
most 1 << logMaxPagesPerSum pages each together into one.
mexit tears down and exits the current thread.
Don't call this directly to exit the thread, since it must run at
the top of the thread stack. Instead, use gogo(&gp.m.g0.sched) to
unwind the stack to the point that exits the thread.
It is entered with m.p != nil, so write barriers are allowed. It
will release the P before exiting.
Try to get an m from midle list.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
Called to initialize a new m (including the bootstrap m).
Called on the new thread, cannot allocate memory.
minitSignalMask is called when initializing a new m to set the
thread's signal mask. When this is called all signals have been
blocked for the thread. This starts with m.sigmask, which was set
either from initSigmask for a newly created thread or by calling
sigsave if this is a non-Go thread calling a Go function. It
removes all essential signals from the mask, thus causing those
signals to not be blocked. Then it sets the thread's signal mask.
After this is called the thread can receive signals.
minitSignals is called when initializing a new m to set the
thread's alternate signal stack and signal mask.
minitSignalStack is called when initializing a new m to set the
alternate signal stack. If the alternate signal stack is not set
for the thread (the normal case) then set the alternate signal
stack to the gsignal stack. If the alternate signal stack is set
for the thread (the case when a non-Go thread sets the alternate
signal stack and then calls a Go function) then set the gsignal
stack to the alternate signal stack. We also set the alternate
signal stack to the gsignal stack if cgo is not used (regardless
of whether it is already set). Record which choice was made in
newSigstack, so that it can be undone in unminit.
mmap is used to route the mmap system call through C code when using cgo, to
support sanitizer interceptors. Don't allow stack splits, since this function
(used by sysAlloc) is called in a lot of low-level parts of the runtime and
callers often assume it won't acquire any locks.
modtimer modifies an existing timer.
This is called by the netpoll code or time.Ticker.Reset or time.Timer.Reset.
Reports whether the timer was modified before it was run.
modTimer modifies an existing timer.
func moduledataverify1(datap *moduledata)
modulesinit creates the active modules slice out of all loaded modules.
When a module is first loaded by the dynamic linker, an .init_array
function (written by cmd/link) is invoked to call addmoduledata,
appending the module to the linked list that starts with
firstmoduledata.
There are two times this can happen in the lifecycle of a Go
program. First, if compiled with -linkshared, a number of modules
built with -buildmode=shared can be loaded at program initialization.
Second, a Go program can load a module while running that was built
with -buildmode=plugin.
After loading, this function is called which initializes the
moduledata so it is usable by the GC and creates a new activeModules
list.
Only one goroutine may call modulesinit at a time.
This is exported as ABI0 via linkname so obj can call it.
moveTimers moves a slice of timers to pp. The slice has been taken
from a different P.
This is currently called when the world is stopped, but the caller
is expected to have locked the timers for pp.
moveToBmap moves a bucket from src to dst. It returns the destination bucket (or a new destination bucket if the original overflows)
and the position at which the next key/value will be written; pos == bucketCnt means the next write must go to an overflow bucket.
mPark causes a thread to park itself, returning once woken.
Called to initialize a new m (including the bootstrap m).
Called on the parent thread (main thread in case of bootstrap), can allocate memory.
mProf_Flush flushes the events from the current heap profiling
cycle into the active profile. After this it is safe to start a new
heap profiling cycle with mProf_NextCycle.
This is called by GC after mark termination starts the world. In
contrast with mProf_NextCycle, this is somewhat expensive, but safe
to do concurrently.
mProf_FlushLocked flushes the events from the heap profiling cycle at index
into the active profile. The caller must hold the lock for the active profile
(profMemActiveLock) and for the profiling cycle at index
(profMemFutureLock[index]).
Called when freeing a profiled block.
Called by malloc to record a profiled block.
mProf_NextCycle publishes the next heap profile cycle and creates a
fresh heap profile cycle. This operation is fast and can be done
during STW. The caller must call mProf_Flush before calling
mProf_NextCycle again.
This is called by mark termination during STW so allocations and
frees after the world is started again count towards a new heap
profiling cycle.
mProf_PostSweep records that all sweep frees for this GC cycle have
completed. This has the effect of publishing the heap profile
snapshot as of the last mark termination without advancing the heap
profile cycle.
Put mp on midle list.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
mReserveID returns the next ID to use for a new m. This new m is immediately
considered 'running' by checkdead.
sched.lock must be held.
func msanmalloc(addr unsafe.Pointer, sz uintptr)
msigrestore sets the current thread's signal mask to sigmask.
This is used to restore the non-Go signal mask when a non-Go thread
calls a Go function.
This is nosplit and nowritebarrierrec because it is called by dropm
after g has been cleared.
mStackIsSystemAllocated indicates whether this runtime starts on a
system-allocated stack.
mstart is the entry-point for new Ms.
It is written in assembly, uses ABI0, is marked TOPFRAME, and calls mstart0.
mstart0 is the Go entry-point for new Ms.
This must not split the stack because we may not even have stack
bounds set up yet.
May run during STW (because it doesn't have a P yet), so write
barriers are not allowed.
The go:noinline is to guarantee the getcallerpc/getcallersp below are safe,
so that we can set up g0.sched to return to the call of mstart1 above.
mstartm0 implements part of mstart1 that only runs on the m0.
Write barriers are allowed here because we know the GC can't be
running yet, so they'll be no-ops.
64x64 -> 128 multiply.
adapted from hacker's delight.
This is a wrapper over runtime/internal/math.MulUintptr,
so the compiler can recognize and treat it as an intrinsic.
func mutexevent(cycles int64, skip int)
Acquire an extra m and bind it to the C thread when a pthread key has been created.
needm is called when a cgo callback happens on a
thread without an m (a thread not created by Go).
In this case, needm is expected to find an m to use
and return with m, g initialized correctly.
Since m and g are not set now (likely nil, but see below)
needm is limited in what routines it can call. In particular
it can only call nosplit functions (textflag 7) and cannot
do any scheduling that requires an m.
In order to avoid needing heavy lifting here, we adopt
the following strategy: there is a stack of available m's
that can be stolen. Using compare-and-swap
to pop from the stack has ABA races, so we simulate
a lock by doing an exchange (via Casuintptr) to steal the stack
head and replace the top pointer with MLOCKED (1).
This serves as a simple spin lock that we can use even
without an m. The thread that locks the stack in this way
unlocks the stack by storing a valid stack head pointer.
In order to make sure that there is always an m structure
available to be stolen, we maintain the invariant that there
is always one more than needed. At the beginning of the
program (if cgo is in use) the list is seeded with a single m.
If needm finds that it has taken the last m off the list, its job
is - once it has installed its own m so that it can do things like
allocate memory - to create a spare m and put it on the list.
Each of these extra m's also has a g0 and a curg that are
pressed into service as the scheduling stack and current
goroutine for the duration of the cgo callback.
It calls dropm to put the m back on the list,
1. when the callback is done with the m on non-pthread platforms,
2. or when the C thread is exiting on pthread platforms.
The signal argument indicates whether we're called from a signal
handler.
netpoll checks for ready network connections.
Returns list of goroutines that become runnable.
delay < 0: blocks indefinitely
delay == 0: does not block, just polls
delay > 0: block for up to that many nanoseconds
func netpollarm(pd *pollDesc, mode int)
returns true if IO is ready, or false if timed out or closed
waitio - wait only for completed IO, ignore errors
Concurrent calls to netpollblock in the same mode are forbidden, as pollDesc
can hold only a single waiting goroutine for each mode.
netpollBreak interrupts an epollwait.
func netpollcheckerr(pd *pollDesc, mode int32) int
func netpollDeadline(arg any, seq uintptr)
func netpolldeadlineimpl(pd *pollDesc, seq uintptr, read, write bool)
func netpollgoready(gp *g, traceskip int)
func netpollopen(fd uintptr, pd *pollDesc) uintptr
func netpollReadDeadline(arg any, seq uintptr)
netpollready is called by the platform-specific netpoll function.
It declares that the fd associated with pd is ready for I/O.
The toRun argument is used to build a list of goroutines to return
from netpoll. The mode argument is 'r', 'w', or 'r'+'w' to indicate
whether the fd is ready for reading or writing or both.
This may run while the world is stopped, so write barriers are not allowed.
func netpollWriteDeadline(arg any, seq uintptr)
newAllocBits returns a pointer to 8 byte aligned bytes
to be used for this span's alloc bits.
newAllocBits is used to provide allocation bits for newly
initialized spans. For spans not being initialized, the
mark bits are repurposed as allocation bits when
the span is swept.
newArenaMayUnlock allocates and zeroes a gcBits arena.
The caller must hold gcBitsArena.lock. This may temporarily release it.
newarray allocates an array of n elements of type typ.
newBucket allocates a bucket with the given type and number of stack entries.
Allocate a Defer, usually using per-P pool.
Each defer must be released with freedefer. The defer is not
added to any defer chain yet.
newextram allocates m's and puts them on the extra list.
It is called with a working local m, so that it can do things
like call schedlock and allocate.
newInlineUnwinder creates an inlineUnwinder initially set to the inner-most
inlined frame at PC. PC should be a "call PC" (not a "return PC").
This unwinder uses non-strict handling of PC because it's assumed this is
only ever used for symbolic debugging. If things go really wrong, it'll just
fall back to the outermost frame.
Create a new m. It will start off with a call to fn, or else the scheduler.
fn needs to be static and not a heap allocated closure.
May run with m.p==nil, so write barriers are not allowed.
id is optional pre-allocated m ID. Omit by passing -1.
newMarkBits returns a pointer to 8 byte aligned bytes
to be used for a span's mark bits.
implementation of new builtin
compiler (both frontend and SSA backend) knows the signature
of this function.
May run with m.p==nil, so write barriers are not allowed.
Version of newosproc that doesn't require a valid G.
Create a new g running fn.
Put it on the queue of g's waiting to run.
The compiler turns a go statement into a call to this.
Create a new g in state _Grunnable, starting at fn. callerpc is the
address of the go statement that created this. The caller is responsible
for adding the new g to the scheduler.
newProfBuf returns a new profiling buffer with room for
a header of hdrsize words and a buffer of at least bufwords words.
func newSpecialsIter(span *mspan) specialsIter
Called from runtime·morestack when more stack is needed.
Allocate larger stack and relocate to new stack.
Stack growth is multiplicative, for constant amortized cost.
g->atomicstatus will be Grunning or Gscanrunning upon entry.
If the scheduler is trying to stop this g, then it will set preemptStop.
This must be nowritebarrierrec because it can be called as part of
stack growth from other nowritebarrierrec functions, but the
compiler doesn't check this.
newUserArena creates a new userArena ready to be used.
newUserArenaChunk allocates a user arena chunk, which maps to a single
heap arena and single span. Returns a pointer to the base of the chunk
(this is really important: we need to keep the chunk alive) and the span.
nextFreeFast returns the next free object if one is quickly available.
Otherwise it returns 0.
nextMarkBitArenaEpoch establishes a new epoch for the arenas
holding the mark bits. The arenas are named relative to the
current GC cycle, which is demarcated by the call to finishsweep_m.
All current spans have been swept.
During that sweep each span allocated room for its gcmarkBits in
gcBitsArenas.next block. gcBitsArenas.next becomes the gcBitsArenas.current
where the GC will mark objects and after each span is swept these bits
will be used to allocate objects.
gcBitsArenas.current becomes gcBitsArenas.previous where the span's
gcAllocBits live until all the spans have been swept during this GC cycle.
The span's sweep extinguishes all the references to gcBitsArenas.previous
by pointing gcAllocBits into the gcBitsArenas.current.
The gcBitsArenas.previous is released to the gcBitsArenas.free list.
nextSample returns the next sampling point for heap profiling. The goal is
to sample allocations on average every MemProfileRate bytes, but with a
completely random distribution over the allocation timeline; this
corresponds to a Poisson process with parameter MemProfileRate. In Poisson
processes, the distance between two samples follows the exponential
distribution (exp(MemProfileRate)), so the best return value is a random
number taken from an exponential distribution whose mean is MemProfileRate.
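A sketch of that sampling rule using the standard library: rand.ExpFloat64 draws from an exponential distribution with mean 1, so scaling by the profiling rate gives the mean described above (the runtime uses its own random source rather than math/rand; compare fastexprand above):

    import "math/rand"

    // nextSampleSketch returns how many bytes to allocate before taking the
    // next heap profile sample.
    func nextSampleSketch(memProfileRate int) int {
        return int(rand.ExpFloat64() * float64(memProfileRate))
    }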
nextSampleNoFP is similar to nextSample, but uses older,
simpler code to avoid floating point.
func nilinterequal(p, q unsafe.Pointer) bool
nobarrierWakeTime looks at P's timers and returns the time when we
should wake up the netpoller. It returns 0 if there are no timers.
This function is invoked when dropping a P, and must run without
any write barriers.
noescape hides a pointer from escape analysis. noescape is
the identity function but escape analysis doesn't think the
output depends on the input. noescape is inlined and currently
compiles down to zero instructions.
USE CAREFULLY!
Type Parameters:
T: any
noEscapePtr hides a pointer from escape analysis. See noescape.
USE CAREFULLY!
func nonblockingPipe() (r, w int32, errno int32)
This is called when we receive a signal when there is no signal stack.
This can only happen if non-Go code calls sigaltstack to disable the
signal stack.
One-time notifications.
func notetsleep(n *note, ns int64) bool
May run with m.p==nil if called from notetsleep, so write barriers
are not allowed.
same as runtime·notetsleep, but called on user g (not g0)
calls only nosplit functions between entersyscallblock/exitsyscall.
func notewakeup(n *note)
notifyListAdd adds the caller to a notify list such that it can receive
notifications. The caller must eventually call notifyListWait to wait for
such a notification, passing the returned ticket number.
notifyListNotifyAll notifies all entries in the list.
notifyListNotifyOne notifies one entry in the list.
notifyListWait waits for a notification. If one has been sent since
notifyListAdd was called, it returns immediately. Otherwise, it blocks.
nsToSec takes a duration in nanoseconds and converts it to seconds as
a float64.
offAddrToLevelIndex converts an address in the offset address space
to the index into summary[level] containing addr.
oneNewExtraM allocates an m and puts it on the extra list.
os_beforeExit is called from os.Exit(0).
func osPreemptExtEnter(mp *m)
func osPreemptExtExit(mp *m)
osRelax is called by the scheduler when transitioning to and from
all Ps being idle.
func osSetupTLS(mp *m)
osStackAlloc performs OS-specific initialization before s is used
as stack memory.
osStackFree undoes the effect of osStackAlloc before s is returned
to the heap.
overLoadFactor reports whether count items placed in 1<<B buckets is over loadFactor.
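A sketch of that check using the map's 6.5 items-per-bucket threshold, written as 13/2 to stay in integer arithmetic; the constants and the small-map cutoff below illustrate the idea rather than copy the runtime's code:

    func overLoadFactorSketch(count int, B uint8) bool {
        const bucketCnt = 8 // key/elem pairs per bucket before overflow buckets are needed
        buckets := uint64(1) << B
        return count > bucketCnt && uint64(count) > 13*buckets/2
    }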
packPallocSum takes a start, max, and end value and produces a pallocSum.
pageIndexOf returns the arena, page index, and page mask for pointer p.
The caller must ensure p is in the heap.
func pageTraceAlloc(pp *p, now int64, base, npages uintptr)
func pageTraceFree(pp *p, now int64, base, npages uintptr)
func pageTraceScav(pp *p, now int64, base, npages uintptr)
Check to make sure we can really generate a panic. If the panic
was generated from the runtime, or from inside malloc, then convert
to a throw of msg.
pc should be the program counter of the compiler-generated code that
triggered this panic.
Same as above, but calling from the runtime is allowed.
Using this function is necessary for any panic that may be
generated by runtime.sigpanic, since those are always called by the
runtime.
panicdottypeE is called when doing an e.(T) conversion and the conversion fails.
have = the dynamic type we have.
want = the static type we're trying to convert to.
iface = the static type we're converting from.
panicdottypeI is called when doing an i.(T) conversion and the conversion fails.
Same args as panicdottypeE, but "have" is the dynamic itab we have.
Implemented in assembly, as they take arguments in registers.
Declared here to mark them as ABIInternal.
func panicIndexU(x uint, y int)
func panicmemAddr(addr uintptr)
panicnildottype is called when doing an i.(T) conversion and the interface i is nil.
want = the static type we're trying to convert to.
func panicSlice3Acap(x int, y int)
func panicSlice3AcapU(x uint, y int)
func panicSlice3Alen(x int, y int)
func panicSlice3AlenU(x uint, y int)
func panicSlice3B(x int, y int)
func panicSlice3BU(x uint, y int)
func panicSlice3C(x int, y int)
func panicSlice3CU(x uint, y int)
func panicSliceAcap(x int, y int)
func panicSliceAcapU(x uint, y int)
func panicSliceAlen(x int, y int)
func panicSliceAlenU(x uint, y int)
func panicSliceB(x int, y int)
func panicSliceBU(x uint, y int)
func panicSliceConvert(x int, y int)
panicwrap generates a panic for a call to a wrapped value method
with a nil pointer receiver.
It is called from the generated wrapper code.
park continuation on g0.
parseByteCount parses a string that represents a count of bytes.
s must match the following regular expression:
^[0-9]+(([KMGT]i)?B)?$
In other words, an integer byte count with an optional unit
suffix. Acceptable suffixes include one of
- KiB, MiB, GiB, TiB which represent binary IEC/ISO 80000 units, or
- B, which just represents bytes.
Returns an int64 because that's what its callers want and receive,
but the result is always non-negative.
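A sketch that accepts the same grammar and returns the byte count; error reporting is collapsed to a boolean and overflow checking is omitted for brevity:

    import "strconv"

    func parseByteCountSketch(s string) (int64, bool) {
        multipliers := map[string]int64{
            "": 1, "B": 1,
            "KiB": 1 << 10, "MiB": 1 << 20, "GiB": 1 << 30, "TiB": 1 << 40,
        }
        i := 0
        for i < len(s) && s[i] >= '0' && s[i] <= '9' {
            i++ // consume the leading digits
        }
        if i == 0 {
            return 0, false // must start with at least one digit
        }
        n, err := strconv.ParseInt(s[:i], 10, 64)
        if err != nil {
            return 0, false
        }
        mult, ok := multipliers[s[i:]]
        if !ok {
            return 0, false // unrecognized suffix
        }
        return n * mult, true
    }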
parsegodebug parses the godebug string, updating variables listed in dbgvars.
If seen == nil, this is startup time and we process the string left to right
overwriting older settings with newer ones.
If seen != nil, $GODEBUG has changed and we are doing an
incremental update. To avoid flapping in the case where a value is
set multiple times (perhaps in the default and the environment,
or perhaps twice in the environment), we process the string right-to-left
and only change values not already seen. After doing this for both
the environment and the default settings, the caller must also call
cleargodebug(seen) to reset any now-unset values back to their defaults.
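A minimal sketch of the right-to-left, first-seen-wins scan described above for the
incremental case (the function name and the callback are invented for illustration;
this is not the runtime's parser):

	package main

	import (
		"fmt"
		"strings"
	)

	func applyGodebugIncremental(godebug string, seen map[string]bool, set func(name, value string)) {
		// Process right to left so that, for a name set multiple times, only the
		// rightmost setting wins; record each name in seen so that earlier
		// occurrences (and later passes over other sources) are ignored.
		pairs := strings.Split(godebug, ",")
		for i := len(pairs) - 1; i >= 0; i-- {
			name, value, ok := strings.Cut(pairs[i], "=")
			if !ok || seen[name] {
				continue
			}
			seen[name] = true
			set(name, value)
		}
	}

	func main() {
		seen := map[string]bool{}
		applyGodebugIncremental("gctrace=1,madvdontneed=0,gctrace=2", seen, func(name, value string) {
			fmt.Println(name, "=", value)
		})
	}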
func pcdatastart(f funcInfo, table uint32) uint32 func pcdatavalue(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache) int32 func pcdatavalue1(f funcInfo, table uint32, targetpc uintptr, cache *pcvalueCache, strict bool) int32
Like pcdatavalue, but also return the start PC of this PCData value.
It doesn't take a cache.
Returns the PCData value, and the PC where this value starts.
TODO: the start PC is returned only when cache is nil.
pcvalueCacheKey returns the outermost index in a pcvalueCache to use for targetpc.
It must be very cheap to calculate.
For now, align to goarch.PtrSize and reduce mod the number of entries.
In practice, this appears to be fairly randomly and evenly distributed.
Wrapper around sysAlloc that can allocate small chunks.
There is no associated free operation.
Intended for things like function/type/debug-related persistent data.
If align is 0, uses default align (currently 8).
The returned memory will be zeroed.
sysStat must be non-nil.
Consider marking persistentalloc'd types not in heap by embedding
runtime/internal/sys.NotInHeap.
Must run on system stack because stack growth can (re)invoke it.
See issue 9174.
pidleget tries to get a p from the _Pidle list, acquiring ownership.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
pidlegetSpinning tries to get a p from the _Pidle list, acquiring ownership.
This is called by spinning Ms (or callers that need a spinning M) that have
found work. If no P is available, this must be synchronized with non-spinning
Ms that may be preparing to drop their P without discovering this work.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
pidleput puts p on the _Pidle list. now must be a relatively recent call
to nanotime or zero. Returns now or the current time if now was zero.
This releases ownership of p. Once sched.lock is released it is no longer
safe to use p.
sched.lock must be held.
May run during STW, so write barriers are not allowed.
only for tests
func pinnerGetPtr(i *any) unsafe.Pointer func plugin_lastmoduleinit() (path string, syms map[string]any, initTasks []*initTask, errstr string)
poll_runtime_isPollServerDescriptor reports whether fd is a
descriptor being used by netpoll.
func poll_runtime_pollOpen(fd uintptr) (*pollDesc, int)
poll_runtime_pollReset, which is internal/poll.runtime_pollReset,
prepares a descriptor for polling in mode, which is 'r' or 'w'.
This returns an error code; the codes are defined above.
func poll_runtime_pollSetDeadline(pd *pollDesc, d int64, mode int)
poll_runtime_pollWait, which is internal/poll.runtime_pollWait,
waits for a descriptor to be ready for reading or writing,
according to mode, which is 'r' or 'w'.
This returns an error code; the codes are defined above.
func poll_runtime_pollWaitCanceled(pd *pollDesc, mode int) func poll_runtime_Semacquire(addr *uint32) func poll_runtime_Semrelease(addr *uint32)
pollFractionalWorkerExit reports whether a fractional mark worker
should self-preempt. It assumes it is called from the fractional
worker.
pollWork reports whether there is non-background work this P could
be doing. This is a fairly lightweight check to be used for
background work loops, like idle GC. It checks a subset of the
conditions checked by the actual scheduler.
Tell all goroutines that they have been preempted and they should stop.
This function is purely best-effort. It can fail to inform a goroutine if a
processor just started running it.
No locks need to be held.
Returns true if preemption request was issued to at least one goroutine.
preemptM sends a preemption request to mp. This request may be
handled asynchronously and may be coalesced with other requests to
the M. When the request is received, if the running G or P are
marked for preemption and the goroutine is at an asynchronous
safe-point, it will preempt the goroutine. It always atomically
increments mp.preemptGen after handling a preemption request.
Tell the goroutine running on processor P to stop.
This function is purely best-effort. It can incorrectly fail to inform the
goroutine. It can inform the wrong goroutine. Even if it informs the
correct goroutine, that goroutine might ignore the request if it is
simultaneously executing newstack.
No lock needs to be held.
Returns true if preemption request was issued.
The actual preemption will happen at some point in the future
and will be indicated by the gp->status no longer being
Grunning
preemptPark parks gp and puts it in _Gpreempted.
prepareFreeWorkbufs moves busy workbuf spans to free list so they
can be freed to the heap. This must only be called when all
workbufs are on the empty list.
Call all Error and String methods before freezing the world.
Used when crashing with panicking.
printAncestorTraceback prints the traceback of the given ancestor.
TODO: Unify this with gentraceback and CallersFrames.
printAncestorTracebackFuncInfo prints the given function info at a given pc
within an ancestor traceback. The precision of this info is reduced
because only the pcs captured at the time the caller goroutine was created
are available.
printany prints an argument passed to panic.
If panic is called with a value that has a String or Error method,
it has already been converted into a string by preprintpanics.
printArgs prints function arguments in traceback.
printCgoTraceback prints a traceback of callers.
func printcreatedby(gp *g) func printcreatedby1(f funcInfo, pc uintptr, goid uint64)
printDebugLog prints the debug log.
printDebugLogPC prints a single symbolized PC. If returnPC is true,
pc is a return PC that must first be converted to a call PC.
func printeface(e eface)
printFuncName prints a function name. name is the function name in
the binary's func data table.
func printiface(i iface)
printOneCgoTraceback prints the traceback of a single cgo caller.
This can print more than one line because of inlining.
It returns the "stop" result of commitFrame.
Print all currently active panics. Used when crashing.
Should only be called after preprintpanics.
printScavTrace prints a scavenge trace line to standard error.
released should be the amount of memory released since the last time this
was called, and forced indicates whether the scavenge was forced by the
application.
scavenger.lock must be held.
func printslice(s []byte)
Change number of processors.
sched.lock must be held, and the world must be stopped.
gcworkbufs must not be being modified by either the GC or the write barrier
code, so the GC must not be running if the number of Ps actually changes.
Returns list of Ps with local work, they need to be scheduled by the caller.
progToPointerMask returns the 1-bit pointer mask output by the GC program prog.
size is the size of the region described by prog, in bytes.
The resulting bitvector will have no more than size/goarch.PtrSize bits.
publicationBarrier performs a store/store barrier (a "publication"
or "export" barrier). Some form of synchronization is required
between initializing an object and making that object accessible to
another processor. Without synchronization, the initialization
writes and the "publication" write may be reordered, allowing the
other processor to follow the pointer and observe an uninitialized
object. In general, higher-level synchronization should be used,
such as locking or an atomic pointer write. publicationBarrier is
for when those aren't an option, such as in the implementation of
the memory manager.
There's no corresponding barrier for the read side because the read
side naturally has a data dependency order. All architectures that
Go supports or seems likely to ever support automatically enforce
data dependency ordering.
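publicationBarrier itself is runtime-internal; in ordinary Go code the same
publication problem is solved with the higher-level synchronization the comment
recommends. A minimal sketch using an atomic pointer store (Go 1.19's
sync/atomic.Pointer) as the publication point:

	package main

	import (
		"fmt"
		"sync/atomic"
	)

	type config struct{ limit int }

	// Readers that observe a non-nil pointer are guaranteed to see the fully
	// initialized object, because the atomic store is the publication point
	// (the role publicationBarrier plays inside the memory manager).
	var current atomic.Pointer[config]

	func publish(limit int) {
		c := &config{limit: limit} // initialization writes
		current.Store(c)           // publication write
	}

	func main() {
		publish(42)
		if c := current.Load(); c != nil {
			fmt.Println(c.limit)
		}
	}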
putempty puts a workbuf onto the work.empty list.
Upon entry this goroutine owns b. The lfstack.push relinquishes ownership.
Returns an extra M back to the list. mp must be from getExtraM. Newly
allocated M's should use addExtraM.
putfull puts the workbuf on the work.full list for the GC.
putfull accepts partially full buffers so the GC can avoid competing
with the mutators for ownership of partially full buffers.
func raceacquire(addr unsafe.Pointer) func raceacquirectx(racectx uintptr, addr unsafe.Pointer) func raceacquireg(gp *g, addr unsafe.Pointer) func racectxend(racectx uintptr) func racemalloc(p unsafe.Pointer, sz uintptr) func racemapshadow(addr unsafe.Pointer, size uintptr)
Notify the race detector of a send or receive involving buffer entry idx
and a channel c or its communicating partner sg.
This function handles the special case of c.elemsize==0.
func raceprocdestroy(ctx uintptr) func racereadpc(addr unsafe.Pointer, callerpc, pc uintptr) func racereadrangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr) func racerelease(addr unsafe.Pointer) func racereleaseacquire(addr unsafe.Pointer) func racereleaseacquireg(gp *g, addr unsafe.Pointer) func racereleaseg(gp *g, addr unsafe.Pointer) func racereleasemerge(addr unsafe.Pointer) func racereleasemergeg(gp *g, addr unsafe.Pointer) func racewritepc(addr unsafe.Pointer, callerpc, pc uintptr) func racewriterangepc(addr unsafe.Pointer, sz, callerpc, pc uintptr)
raisebadsignal is called when a signal is received on a non-Go
thread, and the Go program does not want to handle it (that is, the
program has not called os/signal.Notify for the signal).
rawbyteslice allocates a new byte slice. The byte slice is not zeroed.
rawruneslice allocates a new rune slice. The rune slice is not zeroed.
rawstring allocates storage for a new string. The returned
string and byte slice both refer to the same storage.
The storage is not zeroed. Callers should use
b to set the string contents and then drop b.
read calls the read system call.
It returns a non-negative number of bytes read or a negative errno value.
func readGCStats(pauses *[]uint64)
readGCStats_m must be called on the system stack because it acquires the heap
lock. See mheap for details.
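readGCStats is the runtime-side support for runtime/debug.ReadGCStats; a short
example of the public API it backs:

	package main

	import (
		"fmt"
		"runtime"
		"runtime/debug"
	)

	func main() {
		runtime.GC() // force at least one collection so the stats are non-trivial

		var stats debug.GCStats
		debug.ReadGCStats(&stats)
		fmt.Println("collections:", stats.NumGC)
		fmt.Println("total pause:", stats.PauseTotal)
	}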
All reads and writes of g's status go through readgstatus, casgstatus
castogscanstatus, casfrom_Gscanstatus.
readmemstats_m populates stats for internal runtime values.
The world must be stopped.
readMetricNames is the implementation of runtime/metrics.readMetricNames,
used by the runtime/metrics test and otherwise unreferenced.
readMetrics is the implementation of runtime/metrics.Read.
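readMetrics backs the public runtime/metrics.Read API; a short example of that
public entry point:

	package main

	import (
		"fmt"
		"runtime/metrics"
	)

	func main() {
		// Sample a single well-known metric; metrics.All lists every supported name.
		samples := []metrics.Sample{{Name: "/memory/classes/heap/objects:bytes"}}
		metrics.Read(samples)

		if samples[0].Value.Kind() == metrics.KindUint64 {
			fmt.Println("live heap object bytes:", samples[0].Value.Uint64())
		}
	}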
readTrace0 is ReadTrace's continuation on g0. This must run on the
system stack because it acquires trace.lock.
Read the bytes starting at the aligned pointer p into a uintptr.
Read is little-endian.
Note: These routines perform the read with a native endianness.
readvarint reads a varint from p.
readvarintUnsafe reads the uint32 in varint format starting at fd, and returns the
uint32 and a pointer to the byte following the varint.
There is a similar function runtime.readvarint, which takes a slice of bytes,
rather than an unsafe pointer. These functions are duplicated, because one of
the two use cases for the functions would get slower if the functions were
combined.
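Both helpers decode the standard little-endian base-128 varint encoding. A
self-contained sketch of that decoding, operating on a byte slice like
runtime.readvarint does (this is an illustration, not the runtime's code):

	package main

	import "fmt"

	// readVarint decodes a little-endian base-128 ("varint") value: each byte
	// contributes 7 bits, and the high bit marks that more bytes follow.
	func readVarint(p []byte) (val uint32, n int) {
		var shift uint
		for ; n < len(p); n++ {
			b := p[n]
			val |= uint32(b&0x7f) << shift
			if b&0x80 == 0 {
				return val, n + 1
			}
			shift += 7
		}
		return 0, 0 // truncated input
	}

	func main() {
		val, n := readVarint([]byte{0xac, 0x02})
		fmt.Println(val, n) // 300 2
	}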
Mark gp ready to run.
func readyWithTime(s *sudog, traceskip int)
Write b's data to r.
recordForPanic maintains a circular buffer of messages written by the
runtime leading up to a process crash, allowing the messages to be
extracted from a core dump.
The text written during a process crash (following "panic" or "fatal
error") is not saved, since the goroutine stacks will generally be readable
from the runtime data structures in the core file.
recordspan adds a newly allocated span to h.allspans.
This only happens the first time a span is allocated from
mheap.spanalloc (it is not called when a span is reused).
Write barriers are disallowed here because it can be called from
gcWork when allocating new workbufs. However, because it's an
indirect call from the fixalloc initializer, the compiler can't see
this.
The heap lock must be held.
Unwind the stack after a deferred function calls recover
after a panic. Then arrange to continue running as though
the caller of the deferred function returned normally.
recv processes a receive operation on a full channel c.
There are 2 parts:
1. The value sent by the sender sg is put into the channel
and the sender is woken up to go on its merry way.
2. The value received by the receiver (the current G) is
written to ep.
For synchronous channels, both values are the same.
For asynchronous channels, the receiver gets its data from
the channel buffer and the sender's data is put in the
channel buffer.
Channel c must be full and locked. recv unlocks c with unlockf.
sg must already be dequeued from c.
A non-nil ep must point to the heap or the caller's stack.
The goroutine g is about to enter a system call.
Record that it's not using the cpu anymore.
This is called only from the go syscall library and cgocall,
not from the low-level system calls used by the runtime.
Entersyscall cannot split the stack: the save must
make g->sched refer to the caller's stack segment, because
entersyscall is going to return immediately after.
Nothing entersyscall calls can split the stack either.
We cannot safely move the stack during an active call to syscall,
because we do not know which of the uintptr arguments are
really pointers (back into the stack).
In practice, this means that we make the fast path run through
entersyscall doing no-split things, and the slow path has to use systemstack
to run bigger things on the system stack.
reentersyscall is the entry point used by cgo callbacks, where explicitly
saved SP and PC are restored. This is needed when exitsyscall will be called
from a function further up in the call stack than the parent, as g->syscallsp
must always point to a valid stack frame. entersyscall below is the normal
entry point for syscalls, which obtains the SP and PC from the caller.
Syscall tracing:
At the start of a syscall we emit traceGoSysCall to capture the stack trace.
If the syscall does not block, that is it, we do not emit any other events.
If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;
when syscall returns we emit traceGoSysExit and when the goroutine starts running
(potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.
To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,
we remember current value of syscalltick in m (gp.m.syscalltick = gp.m.p.ptr().syscalltick),
whoever emits traceGoSysBlock increments p.syscalltick afterwards;
and we wait for the increment before emitting traceGoSysExit.
Note that the increment is done even if tracing is not enabled,
because tracing can be enabled in the middle of syscall. We don't want the wait to hang.
reflect_addReflectOff adds a pointer to the reflection offset lookup map.
func reflect_chancap(c *hchan) int func reflect_chanlen(c *hchan) int
reflect_gcbits returns the GC type info for x, for testing.
The result is the bitmap entries (0 or 1), one entry per byte.
func reflect_ifaceE2I(inter *interfacetype, e eface, dst *iface) func reflect_makechan(t *chantype, size int) *hchan func reflect_makemap(t *maptype, cap int) *hmap func reflect_mapclear(t *maptype, h *hmap) func reflect_mapdelete_faststr(t *maptype, h *hmap, key string) func reflect_mapiterinit(t *maptype, h *hmap, it *hiter) func reflect_mapiternext(it *hiter) func reflect_maplen(h *hmap) int func reflect_memmove(to, from unsafe.Pointer, n uintptr)
reflect_resolveNameOff resolves a name offset from a base pointer.
reflect_resolveTextOff resolves a function pointer offset from a base type.
reflect_resolveTypeOff resolves an *rtype offset from a base type.
func reflect_rselect(cases []runtimeSelect) (int, bool) func reflect_typedmemclr(typ *_type, ptr unsafe.Pointer) func reflect_typedmemmove(typ *_type, dst, src unsafe.Pointer) func reflect_typedslicecopy(elemType *_type, dst, src slice) int func reflect_typelinks() ([]unsafe.Pointer, [][]int32) func reflect_unsafe_New(typ *_type) unsafe.Pointer
reflect_verifyNotInHeapPtr reports whether converting the not-in-heap pointer into an unsafe.Pointer is ok.
reflectcall calls fn with arguments described by stackArgs, stackArgsSize,
frameSize, and regArgs.
Arguments passed on the stack and space for return values passed on the stack
must be laid out at the space pointed to by stackArgs (with total length
stackArgsSize) according to the ABI.
stackRetOffset must be some value <= stackArgsSize that indicates the
offset within stackArgs where the return value space begins.
frameSize is the total size of the argument frame at stackArgs and must
therefore be >= stackArgsSize. It must include additional space for spilling
register arguments for stack growth and preemption.
TODO(mknyszek): Once we don't need the additional spill space, remove frameSize,
since frameSize will be redundant with stackArgsSize.
Arguments passed in registers must be laid out in regArgs according to the ABI.
regArgs will hold any return values passed in registers after the call.
reflectcall copies stack arguments from stackArgs to the goroutine stack, and
then copies stackArgsSize-stackRetOffset bytes back to the return space
in stackArgs once fn has completed. It also "unspills" argument registers from
regArgs before calling fn, and spills them back into regArgs immediately
following the call to fn. If there are results being returned on the stack,
the caller should pass the argument frame type as stackArgsType so that
reflectcall can execute appropriate write barriers during the copy.
reflectcall expects regArgs.ReturnIsPtr to be populated indicating which
registers on the return path will contain Go pointers. It will then store
these pointers in regArgs.Ptrs such that they are visible to the GC.
Package reflect passes a frame type. In package runtime, there is only
one call that copies results back, in callbackWrap in syscall_windows.go, and it
does NOT pass a frame type, meaning there are no write barriers invoked. See that
call site for justification.
Package reflect accesses this symbol through a linkname.
Arguments passed through to reflectcall do not escape. The type is used
only in a very limited callee of reflectcall, the stackArgs are copied, and
regArgs is only used in the reflectcall frame.
reflectcallmove is invoked by reflectcall to copy the return values
out of the stack and into the heap, invoking the necessary write
barriers. dst, src, and size describe the return value area to
copy. typ describes the entire frame (not just the return values).
typ may be nil, which indicates write barriers are not needed.
It must be nosplit and must only call nosplit functions because the
stack map of reflectcall is wrong.
func reflectlite_ifaceE2I(inter *interfacetype, e eface, dst *iface)
reflectlite_resolveNameOff resolves a name offset from a base pointer.
reflectlite_resolveTypeOff resolves an *rtype offset from a base type.
func reflectlite_typedmemmove(typ *_type, dst, src unsafe.Pointer)
This function may be called in nosplit context and thus must be nosplit.
Disassociate p and the current m.
func releaseSudog(s *sudog)
Removes the finalizer (if any) from the object p.
Removes the Special record of the given kind for the object p.
Returns the record if the record existed, nil otherwise.
The caller must FixAlloc_Free the result.
reparsedebugvars reparses the runtime's debug variables
because the environment variable has been changed to env.
resetForSleep is called after the goroutine is parked for timeSleep.
We can't call resettimer in timeSleep itself because if this is a short
sleep and there are many goroutines then the P can wind up running the
timer function, goroutineReady, before the goroutine has been parked.
resettimer resets the time when a timer should fire.
If used for an inactive timer, the timer will become active.
This should be called instead of addtimer if the timer value has been,
or may have been, used previously.
Reports whether the timer was modified before it was run.
resetTimer resets an inactive timer, adding it to the heap.
Reports whether the timer was modified before it was run.
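startTimer, stopTimer, and resetTimer are the runtime support behind time.Timer,
and the "modified before it was run" result surfaces through Timer.Stop and
Timer.Reset. The usual safe reuse pattern in user code:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		t := time.NewTimer(50 * time.Millisecond)

		// Stop reports whether the timer was stopped before it fired. If it
		// already fired, drain the channel before reusing the timer.
		if !t.Stop() {
			<-t.C
		}

		// Reset reports whether the timer had still been active.
		wasActive := t.Reset(10 * time.Millisecond)
		fmt.Println("was active:", wasActive)

		<-t.C // wait for the reset timer to fire
		fmt.Println("fired")
	}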
restoreGsignalStack restores the gsignal stack to the value it had
before entering the signal handler.
resumeG undoes the effects of suspendG, allowing the suspended
goroutine to continue from its current safe-point.
Retpolines, used by -spectre=ret flag in cmd/asm, cmd/compile.
retryOnEAGAIN retries a function until it does not return EAGAIN.
It will use an increasing delay between calls, and retry up to 20 times.
The function argument is expected to return an errno value,
and retryOnEAGAIN will return any errno value other than EAGAIN.
If all retries return EAGAIN, then retryOnEAGAIN will return EAGAIN.
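A sketch following that description (the 20-attempt limit comes from the text
above; the exact backoff shape here is invented):

	package main

	import (
		"fmt"
		"syscall"
		"time"
	)

	func retryOnEAGAIN(fn func() syscall.Errno) syscall.Errno {
		for tries := 0; tries < 20; tries++ {
			errno := fn()
			if errno != syscall.EAGAIN {
				return errno
			}
			time.Sleep(time.Duration(tries+1) * time.Millisecond) // increasing delay
		}
		return syscall.EAGAIN
	}

	func main() {
		attempts := 0
		errno := retryOnEAGAIN(func() syscall.Errno {
			attempts++
			if attempts < 3 {
				return syscall.EAGAIN
			}
			return 0
		})
		fmt.Println("attempts:", attempts, "errno:", errno)
	}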
return0 is a stub used to return 0 from deferproc.
It is called at the very end of deferproc to signal
the calling Go function that it should not jump
to deferreturn.
in asm_*.s
round x up to a power of 2.
Returns the size of the memory block that mallocgc will allocate if you ask for the size.
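As an illustration of the power-of-two rounding mentioned above (not the
runtime's size-class logic), a small sketch using math/bits:

	package main

	import (
		"fmt"
		"math/bits"
	)

	// nextPow2 rounds x up to the next power of 2, returning x when it already
	// is one. Sketch only.
	func nextPow2(x uint64) uint64 {
		if x <= 1 {
			return 1
		}
		return 1 << bits.Len64(x-1)
	}

	func main() {
		for _, x := range []uint64{1, 3, 8, 1000} {
			fmt.Println(x, "->", nextPow2(x))
		}
	}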
rt_sigaction is implemented in assembly.
func rtsigprocmask(how int32, new, old *sigset, size int32)
runExitHooks runs any registered exit hook functions (funcs
previously registered using runtime.addExitHook). Here 'exitCode'
is the status code being passed to os.Exit, or zero if the program
is terminating normally without calling os.Exit.
This is the goroutine that runs all of the finalizers.
runGCProg returns the number of 1-bit entries written to memory.
runOneTimer runs a single timer.
The caller must have locked the timers for pp.
This will temporarily unlock the timers while running the timer function.
runOpenDeferFrame runs the active open-coded defers in the frame specified by
d. It normally processes all active defers in the frame, but stops immediately
if a defer does a successful recover. It returns true if there are no
remaining defers to run in the frame.
runPerThreadSyscall runs perThreadSyscall for this M if required.
This function throws if the system call returns with anything other than the
expected values.
runqdrain drains the local runnable queue of pp and returns all goroutines in it.
Executed only by the owner P.
runqempty reports whether pp has no Gs on its local run queue.
It never returns true spuriously.
Get g from local runnable queue.
If inheritTime is true, gp should inherit the remaining time in the
current time slice. Otherwise, it should start a new time slice.
Executed only by the owner P.
Grabs a batch of goroutines from pp's runnable queue into batch.
Batch is a ring buffer starting at batchHead.
Returns number of grabbed goroutines.
Can be executed by any P.
runqput tries to put g on the local runnable queue.
If next is false, runqput adds g to the tail of the runnable queue.
If next is true, runqput puts g in the pp.runnext slot.
If the run queue is full, runqput puts g on the global queue.
Executed only by the owner P.
runqputbatch tries to put all the G's on q on the local runnable queue.
If the queue is full, they are put on the global queue; in that case
this will temporarily acquire the scheduler lock.
Executed only by the owner P.
Put g and a batch of work from local runnable queue on global queue.
Executed only by the owner P.
Steal half of elements from local runnable queue of p2
and put onto local runnable queue of p.
Returns one of the stolen elements (or nil if failed).
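Taken together, the runq* functions above manage a fixed-size per-P ring buffer
plus a single runnext slot. The sketch below is a simplified, single-threaded
model of that shape; the names are invented, and the real queue is lock-free,
can be stolen from by other Ps, and spills to the global run queue when full.

	package main

	import "fmt"

	type runQueue struct {
		ring       [256]string
		head, tail uint32
		runnext    string
		hasNext    bool
	}

	func (q *runQueue) put(g string, next bool) bool {
		if next {
			if q.hasNext {
				// Kick the old runnext onto the tail of the ring instead.
				if !q.put(q.runnext, false) {
					return false // ring full; the real runtime spills to the global queue
				}
			}
			q.runnext, q.hasNext = g, true
			return true
		}
		if q.tail-q.head == uint32(len(q.ring)) {
			return false // ring full
		}
		q.ring[q.tail%uint32(len(q.ring))] = g
		q.tail++
		return true
	}

	func (q *runQueue) get() (string, bool) {
		if q.hasNext {
			q.hasNext = false
			return q.runnext, true // runnext is preferred
		}
		if q.head == q.tail {
			return "", false
		}
		g := q.ring[q.head%uint32(len(q.ring))]
		q.head++
		return g, true
	}

	func main() {
		var q runQueue
		q.put("g1", false)
		q.put("g2", true) // goes to runnext
		for {
			g, ok := q.get()
			if !ok {
				break
			}
			fmt.Println(g)
		}
	}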
runSafePointFn runs the safe point function, if any, for this P.
This should be called like

	if getg().m.p.runSafePointFn != 0 {
		runSafePointFn()
	}
runSafePointFn must be checked on any transition in to _Pidle or
_Psyscall to avoid a race where forEachP sees that the P is running
just before the P goes into _Pidle/_Psyscall and neither forEachP
nor the P run the safe-point function.
runtime_expandFinalInlineFrame expands the final pc in stk to include all
"callers" if pc is inline.
runtime_FrameStartLine returns the start line of the function in a Frame.
runtime_FrameSymbolName returns the full symbol name of the function in a Frame.
For generic functions this differs from f.Function in that this doesn't replace
the shape name with "...".
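These are internal accessors for runtime.Frame values; in user code, Frames are
normally obtained with runtime.Callers plus runtime.CallersFrames:

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		pcs := make([]uintptr, 16)
		n := runtime.Callers(1, pcs) // skip runtime.Callers itself

		frames := runtime.CallersFrames(pcs[:n])
		for {
			frame, more := frames.Next()
			fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
			if !more {
				break
			}
		}
	}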
func runtime_goroutineProfileWithLabels(p []StackRecord, labels []unsafe.Pointer) (n int, ok bool)
readProfile, provided to runtime/pprof, returns the next chunk of
binary CPU profiling stack trace data, blocking until data is available.
If profiling is turned off and all the profile data accumulated while it was
on has been returned, readProfile returns eof=true.
The caller must save the returned data and tags before calling readProfile again.
The returned data contains a whole number of records, and tags contains
exactly one entry per record.
func runtime_setProfLabel(labels unsafe.Pointer)
runtimer examines the first timer in timers. If it is ready based on now,
it runs the timer and removes or updates it.
Returns 0 if it ran a timer, -1 if there are no more timers, or the time
when the first timer should run.
The caller must have locked the timers for pp.
If a timer is run, this will temporarily unlock the timers.
save updates getg().sched to refer to pc and sp so that a following
gogo will restore pc and sp.
save must not have write barriers because invoking a write barrier
can clobber getg().sched.
saveAncestors copies previous ancestors of the given caller g and
includes info for the current caller into a new set of tracebacks for
a g being created.
func saveblockevent(cycles, rate int64, skip int, which bucketType) func saveg(pc, sp uintptr, gp *g, r *StackRecord)
scanblock scans b as scanobject would, but using an explicit
pointer bitmap instead of the heap bitmap.
This is used to scan non-heap roots, so it does not update
gcw.bytesMarked or gcw.heapScanWork.
If stk != nil, possible stack pointers are also reported to stk.putPtr.
scanConservative scans block [b, b+n) conservatively, treating any
pointer-like value in the block as a pointer.
If ptrmask != nil, only words that are marked in ptrmask are
considered as potential pointers.
If state != nil, it's assumed that [b, b+n) is a block in the stack
and may contain pointers to stack objects.
Scan a stack frame: local variables and function arguments/results.
scanobject scans the object starting at b, adding pointers to gcw.
b must point to the beginning of a heap object or an oblet.
scanobject consults the GC bitmap for the pointer mask and the
spans for the size of the object.
scanstack scans gp's stack, greying all pointers found on the stack.
Returns the amount of scan work performed, but doesn't update
gcController.stackScanWork or flush any credit. Any background credit produced
by this function should be flushed by its caller. scanstack itself can't
safely flush because it may result in trying to wake up a goroutine that
was just scanned, resulting in a self-deadlock.
scanstack will also shrink the stack if it is safe to do so. If it
is not, it schedules a stack shrink for the next synchronous safe
point.
scanstack is marked go:systemstack because it must not be preempted
while using a workbuf.
func sched_getaffinity(pid, len uintptr, buf *byte) int32
schedEnabled reports whether gp should be scheduled. It returns
false if scheduling of gp is disabled.
sched.lock must be held.
schedEnableUser enables or disables the scheduling of user
goroutines.
This does not stop already running user goroutines, so the caller
should first stop the world when disabling user goroutines.
The bootstrap sequence is:
	call osinit
	call schedinit
	make & queue new G
	call runtime·mstart
The new G calls runtime·main.
func schedtrace(detailed bool)
One round of scheduler: find a runnable goroutine and execute it.
Never returns.
selectgo implements the select statement.
cas0 points to an array of type [ncases]scase, and order0 points to
an array of type [2*ncases]uint16 where ncases must be <= 65536.
Both reside on the goroutine's stack (regardless of any escaping in
selectgo).
For race detector builds, pc0 points to an array of type
[ncases]uintptr (also on the stack); for other builds, it's set to
nil.
selectgo returns the index of the chosen scase, which matches the
ordinal position of its respective select{recv,send,default} call.
Also, if the chosen scase was a receive operation, it reports whether
a value was received.
compiler implements

	select {
	case v, ok = <-c:
		... foo
	default:
		... bar
	}

as

	if selected, ok = selectnbrecv(&v, c); selected {
		... foo
	} else {
		... bar
	}
compiler implements

	select {
	case c <- v:
		... foo
	default:
		... bar
	}

as

	if selectnbsend(c, v) {
		... foo
	} else {
		... bar
	}
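Spelled out in user code, these two rewrites are the ordinary non-blocking send
and receive idiom:

	package main

	import "fmt"

	func main() {
		c := make(chan int, 1)

		// Non-blocking send: compiled to a selectnbsend call.
		select {
		case c <- 1:
			fmt.Println("sent")
		default:
			fmt.Println("channel full")
		}

		// Non-blocking receive: compiled to a selectnbrecv call.
		select {
		case v, ok := <-c:
			fmt.Println("received", v, ok)
		default:
			fmt.Println("nothing ready")
		}
	}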
func selectsetpc(pc *uintptr)
Called from runtime.
func semacquire1(addr *uint32, lifo bool, profile semaProfileFlags, skipframes int, reason waitReason) func semrelease(addr *uint32) func semrelease1(addr *uint32, handoff bool, skipframes int)
send processes a send operation on an empty channel c.
The value ep sent by the sender is copied to the receiver sg.
The receiver is then woken up to go on its merry way.
Channel c must be empty and locked. send unlocks c with unlockf.
sg must already be dequeued from c.
ep must be non-nil and point to the heap or the caller's stack.
setCheckmark throws if marking object is a checkmarks violation,
and otherwise sets obj's checkmark. It returns true if obj was
already checkmarked.
setcpuprofilerate sets the CPU profiling rate to hz times per second.
If hz <= 0, setcpuprofilerate turns off CPU profiling.
Update the C environment if cgo is loaded.
func setGCPercent(in int32) (out int32) func setGCPhase(x uint32)
setGNoWB performs *gp = new without a write barrier.
For times when it's impractical to use a guintptr.
setGsignalStack sets the gsignal stack of the current m to an
alternate signal stack returned from the sigaltstack system call.
It saves the old values in *old for use by restoreGsignalStack.
This is used when handling a signal if non-Go code has set the
alternate signal stack.
func setMaxStack(in int) (out int) func setMaxThreads(in int) (out int) func setMemoryLimit(in int64) (out int64)
setMNoWB performs *mp = new without a write barrier.
For times when it's impractical to use an muintptr.
func setPanicOnFault(new bool) (old bool)
setPinned marks or unmarks a Go pointer as pinned.
setProcessCPUProfilerTimer is called when the profiling timer changes.
It is called with prof.signalLock held. hz is the new timer, and is 0 if
profiling is being disabled. Enable or disable the signal as
required for -buildmode=c-archive.
Set the heap profile bucket associated with addr to b.
setSignalstackSP sets the ss_sp field of a stackt.
setsigsegv is used on darwin/arm64 to fake a segmentation fault.
This is exported via linkname to assembly in runtime/cgo.
setThreadCPUProfilerHz makes any thread-specific changes required to
implement profiling at a rate of hz.
No changes required on Unix systems when using setitimer.
Called from assembly only; declared for go vet.
func setTraceback(level string)
Shade the object if it isn't already.
The object is not nil and known to be in the heap.
Preemption must be disabled.
shouldPushSigpanic reports whether pc should be used as sigpanic's
return PC (pushing a frame for the call). Otherwise, it should be
left alone so that LR is used as sigpanic's return PC, effectively
replacing the top-most frame with sigpanic. This is used by
preparePanic.
showframe reports whether the frame with the given characteristics should
be printed during a traceback.
showfuncinfo reports whether a function with the given characteristics should
be printed during a traceback.
Maybe shrink the stack being used by gp.
gp must be stopped and we must own its stack. It may be in
_Grunning, but only if this is our own user G.
siftdownTimer puts the timer at position i in the right place
in the heap by moving it down toward the bottom of the heap.
siftupTimer puts the timer at position i in the right place
in the heap by moving it up toward the top of the heap.
It returns the smallest changed index.
func sigaction(sig uint32, new, old *sigactiont) func sigaltstack(new, old *stackt)
sigblock blocks signals in the current thread's signal mask.
This is used to block signals while setting up and tearing down g
when a non-Go thread calls a Go function. When a thread is exiting
we use the sigsetAllExiting value, otherwise the OS specific
definition of sigset_all is used.
This is nosplit and nowritebarrierrec because it is called by needm
which may be called on a non-Go thread with no g available.
sigdisable disables the Go signal handler for the signal sig.
It is only called while holding the os/signal.handlers lock,
via os/signal.disableSignal and signal_disable.
sigenable enables the Go signal handler to catch the signal sig.
It is only called while holding the os/signal.handlers lock,
via os/signal.enableSignal and signal_enable.
sigFetchG fetches the value of G safely when running in a signal handler.
On some architectures, the g value may be clobbered when running in a VDSO.
See issue #32912.
func sigfillset(mask *uint64)
Determines if the signal should be handled by Go and if not, forwards the
signal to the handler that was installed before Go's. Returns whether the
signal was forwarded.
This is called by the signal handler, and the world may be stopped.
sighandler is invoked when a signal occurs. The global g will be
set to a gsignal goroutine and we will be running on the alternate
signal stack. The parameter gp will be the value of the global g
when the signal occurred. The sig, info, and ctxt parameters are
from the system signal handler: they are the parameters passed when
the SA is passed to the sigaction system call.
The garbage collector may have stopped the world, so write barriers
are not allowed.
sigignore ignores the signal sig.
It is only called while holding the os/signal.handlers lock,
via os/signal.ignoreSignal and signal_ignore.
sigInitIgnored marks the signal as already ignored. This is called at
program start by initsig. In a shared library initsig is called by
libpreinit, so the runtime may not be initialized yet.
Must only be called from a single goroutine at a time.
Must only be called from a single goroutine at a time.
Must only be called from a single goroutine at a time.
Checked by signal handlers.
Called to receive the next queued signal.
Must only be called from a single goroutine at a time.
signalDuringFork is called if we receive a signal while doing a fork.
We do not want signals at that time, as a signal sent to the process
group may be delivered to the child process, causing confusion.
This should never be called, because we block signals across the fork;
this function is just a safety check. See issue 18600 for background.
signalM sends a signal to mp.
signalstack sets the current thread's alternate signal stack to s.
signalWaitUntilIdle waits until the signal delivery mechanism is idle.
This is used to ensure that we do not drop a signal notification due
to a race between disabling a signal and receiving a signal.
This assumes that signal delivery has already been disabled for
the signal(s) in question, and here we are just waiting to make sure
that all the signals have been delivered to the user channels
by the os/signal package.
This is called if we receive a signal when there is a signal stack
but we are not on it. This can only happen if non-Go code called
sigaction without setting the SS_ONSTACK flag.
sigpanic turns a synchronous signal into a run-time panic.
If the signal handler sees a synchronous panic, it arranges the
stack to look like the function where the signal occurred called
sigpanic, sets the signal's PC value to sigpanic, and returns from
the signal handler. The effect is that the program will act as
though the function that got the signal simply called sigpanic
instead.
This must NOT be nosplit because the linker doesn't know where
sigpanic calls can be injected.
The signal handler must not inject a call to sigpanic if
getg().throwsplit, since sigpanic may need to grow the stack.
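Seen from user code, the effect of sigpanic is that a synchronous hardware fault
such as a nil-pointer dereference surfaces as an ordinary run-time panic, which
recover can observe. A minimal illustration:

	package main

	import "fmt"

	func main() {
		defer func() {
			// The nil dereference below is delivered as a synchronous signal
			// on most platforms; sigpanic turns it into a run-time panic.
			if r := recover(); r != nil {
				fmt.Println("recovered:", r)
			}
		}()

		var p *int
		fmt.Println(*p) // nil-pointer dereference
	}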
This is exported via linkname to assembly in runtime/cgo.
Injected by the signal handler for panicking signals.
Initializes any registers that have fixed meaning at calls but
are scratch in bodies and calls sigpanic.
On many platforms it just jumps to sigpanic.
func sigprocmask(how int32, new, old *sigset)
Called if we receive a SIGPROF signal.
Called by the signal handler, may run during STW.
sigprofNonGo is called if we receive a SIGPROF signal on a non-Go thread,
and the signal handler collected a stack trace in sigprofCallers.
When this is called, sigprofCallersUse will be non-zero.
g is nil, and what we can do is very limited.
It is called from the signal handling functions written in assembly code that
are active for cgo programs, cgoSigtramp and sigprofNonGoWrapper, which have
not verified that the SIGPROF delivery corresponds to the best available
profiling source for this thread.
sigprofNonGoPC is called when a profiling signal arrived on a
non-Go thread and we have a single PC value, not a stack trace.
g is nil, and what we can do is very limited.
sigsave saves the current thread's signal mask into *p.
This is used to preserve the non-Go signal mask when a non-Go
thread calls a Go function.
This is nosplit and nowritebarrierrec because it is called by needm
which may be called on a non-Go thread with no g available.
sigsend delivers a signal from sighandler to the internal signal delivery queue.
It reports whether the signal was sent. If not, the caller typically crashes the program.
It runs from the signal handler, so it's limited in what it can do.
sigtrampgo is called from the signal handler function, sigtramp,
written in assembly code.
This is called by the signal handler, and the world may be stopped.
It must be nosplit because getg() is still the G that was running
(if any) when the signal was delivered, but it's (usually) called
on the gsignal stack. Until this switches the G to gsignal, the
stack bounds check won't work.
slicebytetostring converts a byte slice to a string.
It is inserted by the compiler into generated code.
ptr is a pointer to the first element of the slice;
n is the length of the slice.
buf is a fixed-size buffer for the result;
it is not nil if the result does not escape.
slicebytetostringtmp returns a "string" referring to the actual []byte bytes.
Callers need to ensure that the returned string will not be used after
the calling goroutine modifies the original slice or synchronizes with
another goroutine.
The function is only called when instrumenting;
otherwise it is intrinsified by the compiler.
Some internal compiler optimizations use this function.
- Used for m[T1{... Tn{..., string(k), ...} ...}] and m[string(k)]
where k is []byte, T1 to Tn is a nesting of struct and array literals.
- Used for "<"+string(b)+">" concatenation where b is []byte.
- Used for string(b)=="foo" comparison where b is []byte.
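The first optimization in the list above is why the common byte-slice map lookup
below does not allocate a temporary string; a small illustration:

	package main

	import "fmt"

	func main() {
		m := map[string]int{"foo": 1}
		b := []byte("foo")

		// The compiler recognizes m[string(b)] and avoids allocating a copy
		// of b just for the lookup.
		if v, ok := m[string(b)]; ok {
			fmt.Println("found", v)
		}
	}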
slicecopy is used to copy from a string or slice of pointerless elements into a slice.
func slicerunetostring(buf *tmpBuf, a []rune) string
spanHasNoSpecials marks a span as having no specials in the arena bitmap.
spanHasSpecials marks a span as having specials in the arena bitmap.
spanOf returns the span of p. If p does not point into the heap
arena or no span has ever contained p, spanOf returns nil.
If p does not point to allocated memory, this may return a non-nil
span that does *not* contain p. If this is a possibility, the
caller should either call spanOfHeap or check the span bounds
explicitly.
Must be nosplit because it has callers that are nosplit.
spanOfHeap is like spanOf, but returns nil if p does not point to a
heap object.
Must be nosplit because it has callers that are nosplit.
spanOfUnchecked is equivalent to spanOf, but the caller must ensure
that p points into an allocated heap arena.
Must be nosplit because it has callers that are nosplit.
Used by reflectcall and the reflect package.
Spills/loads arguments in registers to/from an internal/abi.RegArgs
respectively. Does not follow the Go ABI.
stackalloc allocates an n byte stack.
stackalloc must run on the system stack because it uses per-P
resources and must not split the stack.
stackcacherefill/stackcacherelease implement a global pool of stack segments.
The pool is required to prevent unlimited growth of per-thread caches.
func stackcacherelease(c *mcache, order uint8)
stackcheck checks that SP is in range [g->stack.lo, g->stack.hi).
stackfree frees an n byte stack allocation at stk.
stackfree must run on the system stack because it uses per-P
resources and must not split the stack.
stacklog2 returns ⌊log_2(n)⌋.
func stackmapdata(stkmap *stackmap, n int32) bitvector
Allocates a stack from the free pool. Must be called with
stackpool[order].item.mu held.
Adds stack x to the free pool. Must be called with stackpool[order].item.mu held.
startCheckmarks prepares for the checkmarks phase.
The world must be stopped.
Schedules the locked m to run the locked gp.
May run during STW, so write barriers are not allowed.
Schedules some M to run the p (creates an M if necessary).
If p==nil, tries to get an idle P, if no idle P's does nothing.
May run with m.p==nil, so write barriers are not allowed.
If spinning is set, the caller has incremented nmspinning and must provide a
P. startm will set m.spinning in the newly started M.
Callers passing a non-nil P must call from a non-preemptible context. See
comment on acquirem below.
Argument lockheld indicates whether the caller already acquired the
scheduler lock. Callers holding the lock when making the call must pass
true. The lock might be temporarily dropped, but will be reacquired before
returning.
Must not have write barriers because this may be called without a P.
startpanic_m prepares for an unrecoverable panic.
It returns true if panic messages should be printed, or false if
the runtime is in bad shape and should just print stacks.
It must not have write barriers even though the write barrier
explicitly ignores writes once dying > 0. Write barriers still
assume that g.m.p != nil, and this function may not have P
in some contexts (e.g. a panic in a signal handler for a signal
sent to an M with no P).
Returns the start PC of a goroutine for tracing purposes. If pc is a wrapper,
it returns the PC of the wrapped function. Otherwise it returns pc.
startTemplateThread starts the template thread if it is not already
running.
The calling thread must itself be in a known-good state.
startTheWorld undoes the effects of stopTheWorld.
startTheWorldGC undoes the effects of stopTheWorldGC.
startTimer adds t to the timer heap.
stealWork attempts to steal a runnable goroutine or timer from any P.
If newWork is true, new work may have been readied.
If now is not 0 it is the current time. stealWork returns the passed time or
the current time if now was passed as 0.
step advances to the next pc, value pair in the encoded table.
Return the bucket for stk[0:nstk], allocating new bucket if needed.
Stops execution of the current m that is locked to a g until the g is runnable again.
Returns with acquired P.
Stops execution of the current m until new work is available.
Returns with acquired P.
stopTheWorld stops all P's from executing goroutines, interrupting
all goroutines at GC safe points and records reason as the reason
for the stop. On return, only the current goroutine's P is running.
stopTheWorld must not be called from a system stack and the caller
must not hold worldsema. The caller must call startTheWorld when
other P's should resume execution.
stopTheWorld is safe for multiple goroutines to call at the
same time. Each will execute its own stop, and the stops will
be serialized.
This is also used by routines that do stack dumps. If the system is
in panic or being exited, this may not reliably stop all
goroutines.
stopTheWorldGC has the same effect as stopTheWorld, but blocks
until the GC is not running. It also blocks a GC from starting
until startTheWorldGC is called.
stopTheWorldWithSema is the core implementation of stopTheWorld.
The caller is responsible for acquiring worldsema and disabling
preemption first and then should stopTheWorldWithSema on the system
stack:
	semacquire(&worldsema, 0)
	m.preemptoff = "reason"
	systemstack(stopTheWorldWithSema)

When finished, the caller must either call startTheWorld or undo
these three operations separately:

	m.preemptoff = ""
	systemstack(startTheWorldWithSema)
	semrelease(&worldsema)
It is allowed to acquire worldsema once and then execute multiple
startTheWorldWithSema/stopTheWorldWithSema pairs.
Other P's are able to execute between successive calls to
startTheWorldWithSema and stopTheWorldWithSema.
Holding worldsema causes any other goroutines invoking
stopTheWorld to block.
stopTimer stops a timer.
It reports whether t was stopped before being run.
stringDataOnStack reports whether the string's data is
stored on the current goroutine's stack.
Testing adapters for hash quality tests (see hash_test.go)
func stringStructOf(sp *string) *stringStruct func stringtoslicebyte(buf *tmpBuf, s string) []byte func stringtoslicerune(buf *[32]rune, s string) []rune
subtract1 returns the byte pointer p-1.
nosplit because it is used during write barriers and must not be preempted.
subtractb returns the byte pointer p-n.
suspendG suspends goroutine gp at a safe-point and returns the
state of the suspended goroutine. The caller gets read access to
the goroutine until it calls resumeG.
It is safe for multiple callers to attempt to suspend the same
goroutine at the same time. The goroutine may execute between
subsequent successful suspend operations. The current
implementation grants exclusive access to the goroutine, and hence
multiple callers will serialize. However, the intent is to grant
shared read access, so please don't depend on exclusive access.
This must be called from the system stack and the user goroutine on
the current M (if any) must be in a preemptible state. This
prevents deadlocks where two goroutines attempt to suspend each
other and both are in non-preemptible states. There are other ways
to resolve this deadlock, but this seems simplest.
TODO(austin): What if we instead required this to be called from a
user goroutine? Then we could deschedule the goroutine while
waiting instead of blocking the thread. If two goroutines tried to
suspend each other, one of them would win and the other wouldn't
complete the suspend until it was resumed. We would have to be
careful that they couldn't actually queue up suspend for each other
and then both be suspended. This would also avoid the need for a
kernel context switch in the synchronous case because we could just
directly schedule the waiter. The context switch is unavoidable in
the signal case.
sweepone sweeps some unswept heap span and returns the number of pages returned
to the heap, or ^uintptr(0) if there was nothing to sweep.
func sync_atomic_CompareAndSwapUintptr(ptr *uintptr, old, new uintptr) bool func sync_atomic_StoreUintptr(ptr *uintptr, new uintptr) func sync_atomic_SwapUintptr(ptr *uintptr, new uintptr) uintptr func sync_fatal(s string)
Active spinning for sync.Mutex.
func sync_runtime_registerPoolCleanup(f func()) func sync_runtime_Semacquire(addr *uint32) func sync_runtime_SemacquireMutex(addr *uint32, lifo bool, skipframes int) func sync_runtime_SemacquireRWMutex(addr *uint32, lifo bool, skipframes int) func sync_runtime_SemacquireRWMutexR(addr *uint32, lifo bool, skipframes int) func sync_runtime_Semrelease(addr *uint32, handoff bool, skipframes int) func sync_throw(s string)
syncadjustsudogs adjusts gp's sudogs and copies the part of gp's
stack they refer to while synchronizing with concurrent channel
operations. It returns the number of bytes of stack copied.
sysAlloc transitions an OS-chosen region of memory from None to Ready.
More specifically, it obtains a large chunk of zeroed memory from the
operating system, typically on the order of a hundred kilobytes
or a megabyte. This memory is always immediately available for use.
sysStat must be non-nil.
Don't split the stack as this function may be invoked without a valid G,
which prevents us from allocating more stack.
Don't split the stack as this method may be invoked without a valid G, which
prevents us from allocating more stack.
wrapper for syscall package to call cgocall for libc (cgo) calls.
func syscall_Exit(code int)
Called from syscall package after Exec.
Called from syscall package after fork in parent.
Called from syscall package after fork in child.
It resets non-sigignored signals to the default handler, and
restores the signal mask in preparation for the exec.
Because this might be called during a vfork, and therefore may be
temporarily sharing address space with the parent process, this must
not change any global variables or call into C code that may do so.
Called from syscall package before Exec.
Called from syscall package before fork.
syscall_runtime_doAllThreadsSyscall executes a specified system call on
all Ms.
The system call is expected to succeed and return the same value on every
thread. If any threads do not match, the runtime throws.
func syscall_runtimeSetenv(key, value string)
sysFault transitions a memory region from Ready to Reserved. It
marks a region such that it will always fault if accessed. Used only for
debugging the runtime.
TODO(mknyszek): Currently it's true that all uses of sysFault transition
memory from Ready to Reserved, but this may not be true in the future
since on every platform the operation is much more general than that.
If a transition from Prepared is ever introduced, create a new function
that elides the Ready state accounting.
func sysFaultOS(v unsafe.Pointer, n uintptr)
sysFree transitions a memory region from any state to None. Therefore, it
returns memory unconditionally. It is used if an out-of-memory error has been
detected midway through an allocation or to carve out an aligned section of
the address space. It is okay if sysFree is a no-op only if sysReserve always
returns a memory region aligned to the heap allocator's alignment
restrictions.
sysStat must be non-nil.
Don't split the stack as this function may be invoked without a valid G,
which prevents us from allocating more stack.
Don't split the stack as this function may be invoked without a valid G,
which prevents us from allocating more stack.
sysHugePage does not transition memory regions, but instead provides a
hint to the OS that it would be more efficient to back this memory region
with pages of a larger size transparently.
sysHugePageCollapse attempts to immediately back the provided memory region
with huge pages. It is best-effort and may fail silently.
func sysHugePageOS(v unsafe.Pointer, n uintptr)
sysMap transitions a memory region from Reserved to Prepared. It ensures the
memory region can be efficiently transitioned to Ready.
sysStat must be non-nil.
sysMmap calls the mmap system call. It is implemented in assembly.
Always runs without a P, so write barriers are not allowed.
sysMunmap calls the munmap system call. It is implemented in assembly.
sysNoHugePage does not transition memory regions, but instead provides a
hint to the OS that it would be less efficient to back this memory region
with pages of a larger size transparently.
func sysNoHugePageOS(v unsafe.Pointer, n uintptr)
sysReserve transitions a memory region from None to Reserved. It reserves
address space in such a way that it would cause a fatal fault upon access
(either via permissions or not committing the memory). Such a reservation is
thus never backed by physical memory.
If the pointer passed to it is non-nil, the caller wants the
reservation there, but sysReserve can still choose another
location if that one is unavailable.
NOTE: sysReserve returns OS-aligned memory, but the heap allocator
may use larger alignment, so the caller must be careful to realign the
memory obtained by sysReserve.
sysReserveAligned is like sysReserve, but the returned pointer is
aligned to align bytes. It may reserve either n or n+align bytes,
so it returns the size that was reserved.
sysSigaction calls the rt_sigaction system call.
systemstack runs fn on a system stack.
If systemstack is called from the per-OS-thread (g0) stack, or
if systemstack is called from the signal handling (gsignal) stack,
systemstack calls fn directly and returns.
Otherwise, systemstack is being called from the limited stack
of an ordinary goroutine. In this case, systemstack switches
to the per-OS-thread stack, calls fn, and switches back.
It is common to use a func literal as the argument, in order
to share inputs and outputs with the code around the call
to system stack:
	... set up y ...
	systemstack(func() {
		x = bigcall(y)
	})
	... use x ...
sysUnused transitions a memory region from Ready to Prepared. It notifies the
operating system that the physical pages backing this memory region are no
longer needed and can be reused for other purposes. The contents of a
sysUnused memory region are considered forfeit and the region must not be
accessed again until sysUsed is called.
func sysUnusedOS(v unsafe.Pointer, n uintptr)
sysUsed transitions a memory region from Prepared to Ready. It notifies the
operating system that the memory region is needed and ensures that the region
may be safely accessed. This is typically a no-op on systems that don't have
an explicit commit step and hard over-commit limits, but is critical on
Windows, for example.
This operation is idempotent for memory already in the Prepared state, so
it is safe to refer, with v and n, to a range of memory that includes both
Prepared and Ready memory. However, the caller must provide the exact amount
of Prepared memory for accounting purposes.
taggedPointerPack creates a taggedPointer from a pointer and a tag.
Tag bits that don't fit in the result are discarded.
templateThread is a thread in a known-good state that exists solely
to start new threads in known-good states when the calling thread
may not be in a good state.
Many programs never need this, so templateThread is started lazily
when we first enter a state that might lead to running on a thread
in an unknown state.
templateThread runs on an M without a P, so it must not have write
barriers.
throw triggers a fatal error that dumps a stack trace and exits.
throw should be used for runtime-internal fatal errors where Go itself,
rather than user code, may be at fault for the failure.
Note: Called by runtime/pprof in addition to runtime code.
Poor man's 64-bit division.
This is a very special function, do not use it if you are not sure what you are doing.
int64 division is lowered into a _divv() call on 386, which does not fit into nosplit functions.
Handles overflow in a time-specific manner.
This keeps us within no-split stack limits on 32-bit processors.
timeHistogramMetricsBuckets generates a slice of boundaries for
the timeHistogram. These boundaries are represented in seconds,
not nanoseconds like the timeHistogram represents durations.
func timer_delete(timerid int32) int32 func timer_settime(timerid int32, flags int32, new, old *itimerspec) int32
timeSleep puts the current goroutine to sleep for at least ns nanoseconds.
timeSleepUntil returns the time when the next timer should fire. Returns
maxWhen if there are no timers.
This is only called by sysmon and checkdead.
tooManyOverflowBuckets reports whether noverflow buckets is too many for a map with 1<<B buckets.
Note that most of these overflow buckets must be in sparse use;
if use was dense, then we'd have already triggered regular map growth.
tophash calculates the tophash value for hash.
func trace_userLog(id uint64, category, message string) func trace_userRegion(id, mode uint64, name string) func trace_userTaskCreate(id, parentID uint64, taskType string)
traceAcquireBuffer returns trace buffer to use and, if necessary, locks it.
func traceback1(pc, sp, lr uintptr, gp *g, flags unwindFlags)
traceback2 prints a stack trace starting at u. It skips the first "skip"
logical frames, after which it prints at most "max" logical frames. It
returns n, which is the number of logical frames skipped and printed, and
lastN, which is the number of logical frames skipped or printed just in the
physical frame that u references.
tracebackHexdump hexdumps part of stk around frame.sp and frame.fp
for debugging purposes. If the address bad is included in the
hexdumped range, it will mark it as well.
func tracebackothers(me *g)
tracebackPCs populates pcBuf with the return addresses for each frame from u
and returns the number of PCs written to pcBuf. The returned PCs correspond
to "logical frames" rather than "physical frames"; that is if A is inlined
into B, this will still return a PCs for both A and B. This also includes PCs
generated by the cgo unwinder, if one is registered.
If skip != 0, this skips this many logical frames.
Callers should set the unwindSilentErrors flag on u.
tracebacktrap is like traceback but expects that the PC and SP were obtained
from a trap, not from gp->sched or gp->syscallpc/gp->syscallsp or getcallerpc/getcallersp.
Because they are from a trap instead of from a saved pair,
the initial PC must not be rewound to the previous instruction.
(All the saved pairs record a PC that is a return address, so we
rewind it into the CALL instruction.)
If gp.m.libcall{g,pc,sp} information is available, it uses that information in preference to
the pc/sp/lr passed in.
traceClockNow returns a monotonic timestamp. The clock this function gets
the timestamp from is specific to tracing, and shouldn't be mixed with other
clock sources.
nosplit because it's called from exitsyscall, which is nosplit.
traceCPUSample writes a CPU profile sample stack to the execution tracer's
profiling buffer. It is called from a signal handler, so is limited in what
it can do.
traceEnabled returns true if the trace is currently enabled.
traceEvent writes a single event to trace buffer, flushing the buffer if necessary.
ev is event type.
If skip > 0, write current stack id as the last argument (skipping skip top frames).
If skip = 0, this event type should contain a stack, but we don't want
to collect and remember it for this particular call.
traceEventLocked writes a single event of type ev to the trace buffer bufp,
flushing the buffer if necessary. pid is the id of the current P, or
traceGlobProc if we're tracing without a real P.
Preemption is disabled, and if running without a real P the global tracing
buffer is locked.
Event types that do not include a stack set skip to -1. Event types that
include a stack may explicitly reference a stackID from the trace.stackTab
(obtained by an earlier call to traceStackID). Without an explicit stackID,
this function will automatically capture the stack of the goroutine currently
running on mp, skipping skip top frames or, if skip is 0, writing out an
empty stack record.
It records the event's args to the traceBuf, and also makes an effort to
reserve extraBytes bytes of additional space immediately following the event,
in the same traceBuf.
traceFlush puts buf onto stack of full buffers and returns an empty buffer.
This must run on the system stack because it acquires trace.lock.
tracefpunwindoff returns true if frame pointer unwinding for the tracer is
disabled via GODEBUG or not supported by the architecture.
TODO(#60254): support frame pointer unwinding on plan9/amd64.
traceFrameForPC records the frame information.
It may allocate memory.
traceFrames returns the frames corresponding to pcs. It may
allocate and may emit trace events.
traceFullDequeue dequeues from queue of full buffers.
traceFullQueue queues buf into queue of full buffers.
traceGCSweepSpan traces the sweep of a single page.
This may be called outside a traceGCSweepStart/traceGCSweepDone
pair; however, it will not emit any trace events in this case.
traceGCSweepStart prepares to trace a sweep loop. This does not
emit any events until traceGCSweepSpan is called.
traceGCSweepStart must be paired with traceGCSweepDone and there
must be no preemption points between these two calls.
func traceGoCreate(newg *g, pc uintptr)
func traceGomaxprocs(procs int32)
func traceGoPark(reason traceBlockReason, skip int)
func traceGoSysBlock(pp *p)
func traceGoUnpark(gp *g, skip int)
func traceHeapAlloc(live uint64)
traceLockInit initializes global trace locks.
traceOneNewExtraM registers the fact that a new extra M was created with
the tracer. This matters if the M (which has an attached G) is used while
the trace is still active because if it is, we need the fact that it exists
to show up in the final trace.
traceProcFree frees trace buffer associated with pp.
This must run on the system stack because it acquires trace.lock.
func traceProcStop(pp *p)
traceReader returns the trace reader that should be woken up, if any.
Callers should first check that trace.enabled or trace.shutdown is set.
This must run on the system stack because it acquires trace.lock.
traceReaderAvailable returns the trace reader if it is not currently
scheduled and should be. Callers should first check that trace.enabled
or trace.shutdown is set.
traceReleaseBuffer releases a buffer previously acquired with traceAcquireBuffer.
traceShuttingDown returns true if the trace is currently shutting down.
traceStackID captures a stack trace into pcBuf, registers it in the trace
stack table, and returns its unique ID. pcBuf should have a length equal to
traceStackSize. skip controls the number of leaf frames to omit in order to
hide tracer internals from stack traces, see CL 5523.
traceString adds a string to the trace.strings and returns the id.
func traceSTWStart(reason stwReason)
trygetfull tries to get a full or partially empty workbuffer.
If one is not immediately available return nil.
tryRecordGoroutineProfile ensures that gp1 has the appropriate representation
in the current goroutine profile: either that it should not be profiled, or
that a snapshot of its call stack and labels are now in the profile.
tryRecordGoroutineProfileWB asserts that write barriers are allowed and calls
tryRecordGoroutineProfile.
typeBitsBulkBarrier executes a write barrier for every
pointer that would be copied from [src, src+size) to [dst,
dst+size) by a memmove using the type bitmap to locate those
pointer slots.
The type typ must correspond exactly to [src, src+size) and [dst, dst+size).
dst, src, and size must be pointer-aligned.
The type typ must have a plain bitmap, not a GC program.
The only use of this function is in channel sends, and the
64 kB channel element limit takes care of this for us.
Must not be preempted because it typically runs right before memmove,
and the GC must observe them as an atomic action.
Callers must perform cgo checks if goexperiment.CgoCheck2.
typedmemclr clears the typed memory at ptr with type typ. The
memory at ptr must already be initialized (and hence in type-safe
state). If the memory is being initialized for the first time, see
memclrNoHeapPointers.
If the caller knows that typ has pointers, it can alternatively
call memclrHasPointers.
TODO: A "go:nosplitrec" annotation would be perfect for this.
typedmemmove copies a value of type typ to dst from src.
Must be nosplit, see #16026.
TODO: Perfect for go:nosplitrec since we can't have a safe point
anywhere in the bulk barrier or memmove.
func typedslicecopy(typ *_type, dstPtr unsafe.Pointer, dstLen int, srcPtr unsafe.Pointer, srcLen int) int
typehash computes the hash of the object of type t at address p.
h is the seed.
This function is seldom used. Most maps use for hashing either
fixed functions (e.g. f32hash) or compiler-generated functions
(e.g. for a type like struct { x, y string }). This implementation
is slower but more general and is used for hashing interface types
(called from interhash or nilinterhash, above) or for hashing in
maps generated by reflect.MapOf (reflect_typehash, below).
Note: this function must match the compiler generated
functions exactly. See issue 37716.
typelinksinit scans the types from extra modules and builds the
moduledata typemap used to de-duplicate type pointers.
typesEqual reports whether two types are equal.
Everywhere in the runtime and reflect packages, it is assumed that
there is exactly one *_type per Go type, so that pointer equality
can be used to test if types are equal. There is one place that
breaks this assumption: buildmode=shared. In this case a type can
appear as two different pieces of memory. This is hidden from the
runtime and reflect package by the per-module typemap built in
typelinksinit. It uses typesEqual to map types from later modules
back into earlier ones.
Only typelinksinit needs this function.
unblocksig removes sig from the current thread's signal mask.
This is nosplit and nowritebarrierrec because it is called from
dieFromSignal, which can be called by sigfwdgo while running in the
signal handler, on the signal stack, with no g available.
func unlockextra(mp *m, delta int32)
func unlockWithRank(l *mutex)
Called from dropm to undo the effect of an minit.
unminitSignals is called from dropm, via unminit, to undo the
effect of calling minit on a non-Go thread.
unpackScavChunkData unpacks a scavChunkData from a uint64.
The linker redirects a reference to a method that it has determined is
unreachable to a reference to this function, so this function will throw
if it is ever called.
Keep this code in sync with cmd/compile/internal/walk/builtin.go:walkUnsafeSlice
Keep this code in sync with cmd/compile/internal/walk/builtin.go:walkUnsafeSlice
func unsafestring(ptr unsafe.Pointer, len int)
Keep this code in sync with cmd/compile/internal/walk/builtin.go:walkUnsafeString
func unsafestringcheckptr(ptr unsafe.Pointer, len64 int64)
Update the C environment if cgo is loaded.
updateTimer0When sets the P's timer0When field.
The caller must have locked the timers for pp.
updateTimerModifiedEarliest updates pp's recorded timerModifiedEarliest value
to nextwhen if nextwhen is earlier than the currently recorded value.
The timers for pp will not be locked.
updateTimerPMask clears pp's timer mask if it has no timers on its heap.
Ideally, the timer mask would be kept immediately consistent on any timer
operations. Unfortunately, updating a shared global data structure in the
timer hot path adds too much overhead in applications frequently switching
between no timers and some timers.
As a compromise, the timer mask is updated only on pidleget / pidleput. A
running P (returned by pidleget) may add a timer at any time, so its mask
must be set. An idle P (passed to pidleput) cannot add new timers while
idle, so if it has no timers at that time, its mask may be cleared.
Thus, we get the following effects on timer-stealing in findrunnable:
- Idle Ps with no timers when they go idle are never checked in findrunnable
(for work- or timer-stealing; this is the ideal case).
- Running Ps must always be checked.
- Idle Ps whose timers are stolen must continue to be checked until they run
again, even after timer expiration.
When the P starts running again, the mask should be set, as a timer may be
added at any time.
TODO(prattmic): Additional targeted updates may improve the above cases.
e.g., updating the mask when stealing a timer.
userArenaHeapBitsSetSliceType is the equivalent of heapBitsSetType but for
Go slice backing store values allocated in a user arena chunk. It sets up the
heap bitmap for n consecutive values with type typ allocated at address ptr.
userArenaHeapBitsSetType is the equivalent of heapBitsSetType but for
non-slice-backing-store Go values allocated in a user arena chunk. It
sets up the heap bitmap for the value with type typ allocated at address ptr.
base is the base address of the arena chunk.
usesLibcall indicates whether this runtime performs system calls
via libcall.
func usleep_no_g(usec uint32)
validSIGPROF compares this signal delivery's code against the signal sources
that the profiler uses, returning whether the delivery should be processed.
To be processed, a signal delivery from a known profiling mechanism should
correspond to the best profiling mechanism available to this thread. Signals
from other sources are always considered valid.
values for implementing maps.values
func vdsoFindVersion(info *vdsoInfo, ver *vdsoVersionKey) int32
func vdsoInitFromSysinfoEhdr(info *vdsoInfo, hdr *elfEhdr)
func vdsoParseSymbols(info *vdsoInfo, version int32)
verifyTimerHeap verifies that the timer heap is in a valid state.
This is only for debugging, and is only called if verifyTimers is true.
The caller must have locked the timers.
wakeNetPoller wakes up the thread sleeping in the network poller if it isn't
going to wake up before the when argument; or it wakes an idle P to service
timers and the network poller if there isn't one already.
Tries to add one more P to execute G's.
Called when a G is made runnable (newproc, ready).
Must be called with a P.
wantAsyncPreempt returns whether an asynchronous preemption is
queued for gp.
wbBufFlush flushes the current P's write barrier buffer to the GC
workbufs.
This must not have write barriers because it is part of the write
barrier implementation.
This and everything it calls must be nosplit because 1) the stack
contains untyped slots from gcWriteBarrier and 2) there must not be
a GC safe point between the write barrier test in the caller and
flushing the buffer.
TODO: A "go:nosplitrec" annotation would be perfect for this.
wbBufFlush1 flushes p's write barrier buffer to the GC work queue.
This must not have write barriers because it is part of the write
barrier implementation, so this may lead to infinite loops or
buffer corruption.
This must be non-preemptible because it uses the P's workbuf.
wbMove performs the write barrier operations necessary before
copying a region of memory from src to dst of type typ.
Does not actually do the copying.
wbZero performs the write barrier operations necessary before
zeroing a region of memory at address dst of type typ.
Does not actually do the zeroing.
wirep is the first step of acquirep, which actually associates the
current M to pp. This is broken out so we can disallow write
barriers for this part, since we don't yet have a P.
write must be nosplit on Windows (see write1)
write1 calls the write system call.
It returns a non-negative number of bytes written or a negative errno value.
writeErrStr writes a string to descriptor 2.
func writeHeapBitsForAddr(addr uintptr) (h writeHeapBits)
func writeheapdump_m(fd uintptr, m *MemStats)
Package-Level Variables (total 285, in which 1 is exported)
MemProfileRate controls the fraction of memory allocations
that are recorded and reported in the memory profile.
The profiler aims to sample an average of
one allocation per MemProfileRate bytes allocated.
To include every allocated block in the profile, set MemProfileRate to 1.
To turn off profiling entirely, set MemProfileRate to 0.
The tools that process the memory profiles assume that the
profile rate is constant across the lifetime of the program
and equal to the current value. Programs that change the
memory profiling rate should do so just once, as early as
possible in the execution of the program (for example,
at the beginning of main).
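For example, a program that wants every allocation recorded can set the rate once at the very start of main (ordinary use of the exported variable, shown here for illustration):

    package main

    import "runtime"

    func main() {
        // Record every allocated block in the memory profile. Set this once,
        // as early as possible, so the rate is constant for the whole run.
        runtime.MemProfileRate = 1

        // ... rest of the program ...
    }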
_cgo_mmap is filled in by runtime/cgo when it is linked into the
program, so it is only non-nil when using cgo.
_cgo_munmap is filled in by runtime/cgo when it is linked into the
program, so it is only non-nil when using cgo.
var _cgo_setenv unsafe.Pointer // pointer to C function
_cgo_sigaction is filled in by runtime/cgo when it is linked into the
program, so it is only non-nil when using cgo.
var _cgo_unsetenv unsafe.Pointer // pointer to C function
var addrspace_vec [1]byte
used in asm_{386,amd64,arm64}.s to seed the hash function
agg is used by readMetrics, and is protected by metricsSema.
Managed as a global variable because its pointer will be
an argument to a dynamically-defined function, and we'd
like to avoid it escaping to the heap.
allDloggers is a list of all dloggers, linked through
dlogger.allLink. This is accessed atomically. This is prepend only,
so it doesn't need to protect against ABA races.
allglen and allgptr are atomic variables that contain len(allgs) and
&allgs[0] respectively. Proper ordering depends on totally-ordered
loads and stores. Writes are protected by allglock.
allgptr is updated before allglen. Readers should read allglen
before allgptr to ensure that allglen is always <= len(allgptr). New
Gs appended during the race can be missed. For a consistent view of
all Gs, allglock must be held.
allgptr copies should always be stored as a concrete type or
unsafe.Pointer, not uintptr, to ensure that GC can still reach it
even if it points to a stale array.
allgs contains all Gs ever created (including dead Gs), and thus
never shrinks.
Access via the slice is protected by allglock or stop-the-world.
Readers that cannot take the lock may (carefully!) use the atomic
variables below.
allocmLock is locked for read when creating new Ms in allocm and their
addition to allm. Thus acquiring this lock for write blocks the
creation of new Ms.
len(allp) == gomaxprocs; may change at safe points, otherwise
immutable.
allpLock protects P-less reads and size changes of allp, idlepMask,
and timerpMask, and all writes to allp.
asyncPreemptStack is the bytes of stack space required to inject an
asyncPreempt call.
auxv is populated on relevant platforms but defined here for all platforms
so x/sys/cpu can assume the getAuxv symbol exists without keeping its list
of auxv-using GOOS build tags in sync.
It contains an even number of elements, (tag, value) pairs.
var auxvreadbuf [128]uintptr
var bbuckets atomic.UnsafePointer // *bucket, blocking profile buckets
var blockprofilerate uint64 // in CPU ticks
var boringCaches []unsafe.Pointer // for crypto/internal/boring
boundsErrorFmts provide error text for various out-of-bounds panics.
Note: if you change these strings, you should adjust the size of the buffer
in boundsError.Error below as well.
boundsNegErrorFmts are overriding formats if x is negative. In this case there's no need to report y.
var buckhash atomic.UnsafePointer // *buckhashArray
buildVersion is the Go tree's version string at build time.
If any GOEXPERIMENTs are set to non-default values, it will include
"X:<GOEXPERIMENT>".
This is set by the linker.
This is accessed by "go version <binary>".
casgstatusAlwaysTrack is a debug flag that causes casgstatus to always track
various latencies on every transition instead of sampling them.
cgoAlwaysFalse is a boolean value that is always false.
The cgo-generated code says if cgoAlwaysFalse { cgoUse(p) }.
The compiler cannot see that cgoAlwaysFalse is always false,
so it emits the test and keeps the call, giving the desired
escape analysis result. The test is cheaper than the call.
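A minimal sketch of the same idiom in ordinary code (alwaysFalse, use, and keep are hypothetical names, not runtime identifiers): because the compiler cannot prove the package-level variable is false, it keeps the test and the call, so p is treated as escaping even though the branch never executes.

    // alwaysFalse is never set to true; the compiler still cannot prove that.
    var alwaysFalse bool

    //go:noinline
    func use(p interface{}) {}

    // keep forces the desired escape-analysis result for p at the cost of a
    // cheap test, mirroring the "if cgoAlwaysFalse { cgoUse(p) }" pattern above.
    func keep(p interface{}) {
        if alwaysFalse {
            use(p)
        }
    }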
cgoHasExtraM is set on startup when an extra M is created for cgo.
The extra M must be created before any C/C++ code calls cgocallback.
When running with cgo, we call _cgo_thread_start
to start threads for us so that we can play nicely with
foreign code.
var class_to_size [68]uint16
covMeta is the top-level container for bits of state related to
code coverage meta-data in the runtime.
crashing is the number of m's we have waited for when implementing
GOTRACEBACK=crash when a signal is received.
Holds variables parsed from GODEBUG env var,
except for "memprofilerate" since there is an
existing int var for that value, which may
already have an initial value.
var debugPtrmask struct{lock mutex; data *byte}
var defaultGOROOT string // set by cmd/link
disableMemoryProfiling is set by the linker if runtime.MemProfile
is not used and the link type guarantees nobody else could use it
elsewhere.
channels for synchronizing signal mask updates with the signal mask
thread
dummy mspan that contains no free objects.
channels for synchronizing signal mask updates with the signal mask
thread
execLock serializes exec and clone to avoid bugs or unspecified
behaviour around exec'ing while creating/destroying threads. See
issue #19546.
exitHooks stores state related to hook functions registered to
run when program execution terminates.
Locking linked list of extra M's, via mp.schedlink. Must be accessed
only via lockextra/unlockextra.
Can't be atomic.Pointer[m] because we use an invalid pointer as a
"locked" sentinel value. M's on this list remain visible to the GC
because their mp.curg is on allgs.
Number of extra M's in use by threads.
Number of M's in the extraM list.
Number of waiters in lockextra.
faketime is the simulated time in nanoseconds since 1970 for the
playground.
Zero means not to use faketime.
var fastlog2Table [33]float64
var finalizer1 [5]byte
var finptrmask [64]byte
var firstmoduledata moduledata // linker symbol
forcegcperiod is the maximum time in nanoseconds between garbage
collections. If we go this long without a garbage collection, one
is forced to run.
This is a variable for testing purposes. It normally doesn't change.
Bit vector of free marks.
Needs to be as big as the largest number of objects per span.
freezing is set to non-zero if the runtime is trying to freeze the
world.
Stores the signal handlers registered before Go installed its own.
These signal handlers will be invoked in cases where Go doesn't want to
handle a particular signal (e.g., signal occurred on a non-Go thread).
See sigfwdgo for more information on when the signals are forwarded.
This is read by the signal handler; accesses should use
atomic.Loaduintptr and atomic.Storeuintptr.
Total number of gcBgMarkWorker goroutines. Protected by worldsema.
Pool of GC parked background workers. Entries are type
*gcBgMarkWorkerNode.
var gcBitsArenas struct{lock mutex; free *gcBitsArena; next *gcBitsArena; current *gcBitsArena; previous *gcBitsArena}
gcBlackenEnabled is 1 if mutator assists and background mark
workers are allowed to blacken objects. This must only be set when
gcphase == _GCmark.
gcController implements the GC pacing controller that determines
when to trigger concurrent garbage collection and how much marking
work to do in mutator assists and background marking.
It calculates the ratio between the allocation rate (in terms of CPU
time) and the GC scan throughput to determine the heap size at which to
trigger a GC cycle such that no GC assists are required to finish on time.
This algorithm thus optimizes GC CPU utilization to the dedicated background
mark utilization of 25% of GOMAXPROCS by minimizing GC assists.
The high-level design of this algorithm is documented
at https://github.com/golang/proposal/blob/master/design/44167-gc-pacer-redesign.md.
See https://golang.org/s/go15gcpacing for additional historical context.
gcCPULimiter is a mechanism to limit GC CPU utilization in situations
where it might become excessive and inhibit application progress (e.g.
a death spiral).
The core of the limiter is a leaky bucket mechanism that fills with GC
CPU time and drains with mutator time. Because the bucket fills and
drains with time directly (i.e. without any weighting), this effectively
sets a very conservative limit of 50%. This limit could be enforced directly;
however, the purpose of the bucket is to accommodate spikes in GC CPU
utilization without hurting throughput.
Note that the bucket in the leaky bucket mechanism can never go negative,
so the GC never gets credit for a lot of CPU time spent without the GC
running. This is intentional, as an application that stays idle for, say,
an entire day, could build up enough credit to fail to prevent a death
spiral the following day. The bucket's capacity is the GC's only leeway.
The capacity thus also sets the window the limiter considers. For example,
if the capacity of the bucket is 1 cpu-second, then the limiter will not
kick in until at least 1 full cpu-second in the last 2 cpu-second window
is spent on GC CPU time.
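A simplified leaky-bucket sketch of the mechanism described above (hypothetical code, not the runtime's limiter): GC CPU time fills the bucket, mutator CPU time drains it, the fill level is clamped to [0, capacity], and the limiter engages while the bucket is full.

    // bucketSketch models the accounting only; times are nanoseconds of CPU time.
    type bucketSketch struct {
        fill, capacity int64
    }

    // update adds GC time, subtracts mutator time, clamps the result so long
    // idle periods cannot bank unlimited credit, and reports whether the
    // limiter should currently be limiting.
    func (b *bucketSketch) update(gcTime, mutatorTime int64) bool {
        b.fill += gcTime - mutatorTime
        if b.fill < 0 {
            b.fill = 0
        }
        if b.fill > b.capacity {
            b.fill = b.capacity
        }
        return b.fill >= b.capacity
    }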
gcMarkDoneFlushed counts the number of P's with flushed work.
Ideally this would be a captured local in gcMarkDone, but forEachP
escapes its callback closure, so it can't capture anything.
This is protected by markDoneSema.
gcMarkWorkerModeStrings are the strings labels of gcMarkWorkerModes
to use in execution traces.
Garbage collector phase.
Indicates to write barrier and synchronization task to perform.
Holding gcsema grants the M the right to block a GC, and blocks
until the current GC is done. In particular, it prevents gomaxprocs
from changing concurrently.
TODO(mknyszek): Once gomaxprocs and the execution tracer can handle
being changed/enabled during a GC, remove this.
var globalAlloc struct{mutex; persistentAlloc}
var godebugEnv atomic.Pointer[string] // set by parsedebugvars
var godebugNewIncNonDefault atomic.Pointer[func(string) func()]
var goroutineProfile struct{sema uint32; active bool; offset atomic.Int64; records []StackRecord; labels []Pointer}
var gStatusStrings [10]string
handlingSig is indexed by signal number and is non-zero if we are
currently handling the signal. Or, to put it another way, whether
the signal handler is currently set to the Go signal handler or not.
This is uint32 rather than bool so that we can use atomic instructions.
used in hash{32,64}.go to seed the hash function
Bitmask of Ps in _Pidle list, one bit per P. Reads and writes must
be atomic. Length may change at safe points.
Each P must update only its own bit. In order to maintain
consistency, a P going idle must update the idle mask simultaneously with
updates to the idle P list under the sched.lock, otherwise a racing
pidleget may clear the mask before pidleput sets the mask,
corrupting the bitmap.
N.B., procresize takes ownership of all Ps in stopTheWorldWithSema.
inForkedChild is true while manipulating signals in the child process.
This is used to avoid calling libc functions in case we are using vfork.
Value to use for signal mask for newly created M's.
inittrace stores statistics for init functions which are
updated by malloc and newproc when active is true.
intArgRegs is used by the various register assignment
algorithm implementations in the runtime. These include:
- Finalizers (mfinal.go)
- Windows callbacks (syscall_windows.go)
Both are stripped-down versions of the algorithm since they
only have to deal with a subset of cases (finalizers only
take a pointer or interface argument, Go Windows callbacks
don't support floating point).
It should be modified with care, and is generally only
modified when testing this package.
It should never be set higher than its internal/abi
constant counterparts, because the system relies on a
structure that is at least large enough to hold the
registers the system supports.
Protected by finlock.
Set by the linker so the runtime can determine the buildmode.
iscgo is set to true by the runtime/cgo package
Set by the linker so the runtime can determine the buildmode.
var itabTable *itabTableType // pointer to current table
var itabTableInit itabTableType // starter table
var lastmoduledatap *moduledata // linker symbol
levelBits is the number of bits in the radix for a given level in the super summary
structure.
The sum of all the entries of levelBits should equal heapAddrBits.
levelLogPages is log2 the maximum number of runtime pages in the address space
a summary in the given level represents.
The leaf level always represents exactly log2 of 1 chunk's worth of pages.
levelShift is the number of bits to shift to acquire the radix for a given level
in the super summary structure.
With levelShift, one can compute the index of the summary at level l related to a
pointer p by doing:
p >> levelShift[l]
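For example (a sketch; the shift parameter stands in for levelShift[l], and offset-address handling is omitted):

    // summaryIndexSketch returns the index of a summary at a level whose radix
    // shift is shift, i.e. the formula above with shift = levelShift[l].
    func summaryIndexSketch(p uintptr, shift uint) int {
        return int(p >> shift)
    }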
lockNames gives the names associated with each of the above ranks.
lockPartialOrder is the transitive closure of the lock rank graph.
An entry for rank X lists all of the ranks that can already be held
when rank X is acquired.
Lock ranks that allow self-cycles list themselves.
main_init_done is a signal used by cgocallbackg that initialization
has been completed. It is made before _cgo_notify_runtime_init_done,
so all cgo calls can rely on it existing. When main_init is complete,
it is closed, meaning cgocallbackg can reliably receive from it.
mainStarted indicates that the main M has started.
channels for synchronizing signal mask updates with the signal mask
thread
maxOffAddr is the maximum address in the offset address
space. It corresponds to the highest virtual address representable
by the page alloc chunk and heap arena maps.
var maxstacksize uintptr // enough until runtime.main sets it for real
var mbuckets atomic.UnsafePointer // *bucket, memory profile buckets
var methodValueCallFrameObjs [1]stackObjectRecord // initialized in stackobjectinit
var metrics map[string]metricData
metrics is a map of runtime/metrics keys to data used by the runtime
to sample each metric's value. metricsInit indicates it has been
initialized.
These fields are protected by metricsSema which should be
locked/unlocked with metricsLock() / metricsUnlock().
var minhexdigits int // protected by printlock
minOffAddr is the minimum address in the offset space, and
it corresponds to the virtual address arenaBaseOffset.
set using cmd/go/internal/modload.ModInfoProg
var modulesSlice *[]*moduledata // see activeModules
mSpanStateNames are the names of the span states, indexed by
mSpanState.
var mutexprofilerate uint64 // fraction sampled
needSysmonWorkaround is true if the workaround for
golang.org/issue/42515 is needed on NetBSD.
var netpollBreakRd uintptr // for netpollBreak
var netpollBreakWr uintptr // for netpollBreak
var netpollWakeSig atomic.Uint32 // used to avoid duplicate calls of netpollBreak
newmHandoff contains a list of m structures that need new OS threads.
This is used by newm in situations where newm itself can't safely
start an OS thread.
ptrmask for an allocation containing a single pointer.
var overflowTag [1]unsafe.Pointer // always nil
panicking is non-zero when crashing the program for an unrecovered panic.
paniclk is held while printing the panic information and stack trace,
so that two concurrent panics don't overlap their output.
pendingPreemptSignals is the number of preemption signals
that have been sent but not received. This is only used on Darwin.
For #41702.
persistentChunks is a list of all the persistent chunks we have
allocated. The list is maintained through the first word in the
persistent chunk. This is updated atomically.
perThreadSyscall is the system call to execute for the ongoing
doAllThreadsSyscall.
perThreadSyscall may only be written while mp.needPerThreadSyscall == 0 on
all Ms.
physHugePageSize is the size in bytes of the OS's default physical huge
page size whose allocation is opaque to the application. It is assumed
and verified to be a power of two.
If set, this must be set by the OS init code (typically in osinit) before
mallocinit. However, setting it at all is optional, and leaving the default
value is always safe (though potentially less efficient).
Since physHugePageSize is always assumed to be a power of two,
physHugePageShift is defined as physHugePageSize == 1 << physHugePageShift.
The purpose of physHugePageShift is to avoid doing divisions in
performance critical functions.
physHugePageSize is the size in bytes of the OS's default physical huge
page size whose allocation is opaque to the application. It is assumed
and verified to be a power of two.
If set, this must be set by the OS init code (typically in osinit) before
mallocinit. However, setting it at all is optional, and leaving the default
value is always safe (though potentially less efficient).
Since physHugePageSize is always assumed to be a power of two,
physHugePageShift is defined as physHugePageSize == 1 << physHugePageShift.
The purpose of physHugePageShift is to avoid doing divisions in
performance critical functions.
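For example (a hedged sketch with hypothetical helper names), the power-of-two invariant lets hot paths replace division by the huge page size with a shift, and the shift itself is just the number of times the size can be halved:

    // hugePageShiftOf computes the shift satisfying size == 1<<shift for a
    // power-of-two size, as in the invariant above.
    func hugePageShiftOf(size uintptr) uint {
        var shift uint
        for size > 1 {
            size >>= 1
            shift++
        }
        return shift
    }

    // hugePagesIn divides n by the huge page size without a division instruction.
    func hugePagesIn(n uintptr, shift uint) uintptr {
        return n >> shift
    }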
physPageSize is the size in bytes of the OS's physical pages.
Mapping and unmapping operations must be done at multiples of
physPageSize.
This must be set by the OS init code (typically in osinit) before
mallocinit.
pinnedTypemaps are the map[typeOff]*_type from the moduledata objects.
These typemap objects are allocated at run time on the heap, but the
only direct reference to them is in the moduledata, created by the
linker and marked SNOPTRDATA so it is ignored by the GC.
To make sure the map isn't collected, we keep a second reference here.
To be able to test that the GC panics when a pinned pointer is leaking, this
panic function is a variable that can be overwritten by a test.
var poolcleanup func()
printBacklog is a circular buffer of messages written with the builtin
print* functions, for use in postmortem analysis of core dumps.
Information about what cpu features are available.
Packages outside the runtime should not use these
as they are not an external api.
Set on startup in asm_{386,amd64}.s
profBlockLock protects the contents of every blockRecord struct
profInsertLock protects changes to the start of all *bucket linked lists
profMemActiveLock protects the active field of every memRecord struct
profMemFutureLock is a set of locks that protect the respective elements
of the future array of every memRecord struct
var racecgosync uint64 // represents possible synchronization in C code
reflectOffs holds type offsets defined at run time by the reflect package.
When a type is defined at run time, its *rtype data lives on the heap.
There are a wide range of possible addresses the heap may use, that
may not be representable as a 32-bit offset. Moreover the GC may
one day start moving heap memory, in which case there is no stable
offset that can be defined.
To provide stable offsets, we add pin *rtype objects in a global map
and treat the offset as an identifier. We use negative offsets that
do not overlap with any compile-time module offsets.
Entries are created by reflect.addReflectOff.
runningPanicDefers is non-zero while running deferred functions for panic.
This is used to try hard to get a panic stack trace out when exiting.
This slice records the initializing tasks that need to be
done to start up the runtime. It is built by the linker.
runtimeInitTime is the nanotime() at which the runtime started.
var scavenge struct{gcPercentGoal atomic.Uint64; memoryLimitGoal atomic.Uint64; assistTime atomic.Int64; backgroundTime atomic.Int64}
Sleep/wait state of the background scavenger.
secureMode holds the value of AT_SECURE passed in the auxiliary vector.
set_crosscall2 is set by the runtime/cgo package
sig handles communication between the signal handler and os/signal.
Other than the inuse and recv fields, the fields are accessed atomically.
The wanted and ignored fields are only written by one goroutine at
a time; access is controlled by the handlers Mutex in os/signal.
The fields are only read by that one goroutine and by the signal handler.
We access them atomically to minimize the race between setting them
in the goroutine calling os/signal and the signal handler,
which may be running in a different thread. That race is unavoidable,
as there is no connection between handling a signal and receiving one,
but atomic instructions should minimize it.
If the signal handler receives a SIGPROF signal on a non-Go thread,
it tries to collect a traceback into sigprofCallers.
sigprofCallersUse is set to non-zero while sigprofCallers holds a traceback.
sigsetAllExiting is used by sigblock(true) when a thread is
exiting. sigset_all is defined in OS specific code, and per GOOS
behavior may override this default for sigsetAllExiting: see
osinit().
var size_to_class128 [249]uint8
var size_to_class8 [129]uint8
spanSetBlockPool is a global pool of spanSetBlocks.
Global pool of large stack spans.
var stackPoisonCopy int // fill stack that should not be accessed with garbage, to detect bad dereferences during copy
Global pool of spans that have free stacks.
Stacks are assigned an order according to size.
order = log_2(size/FixedStack)
There is a free list for each order.
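For example (a sketch; fixedStack is hard-coded to 2048 here purely for illustration, since the real value varies by platform), the order of a power-of-two stack size follows directly from the formula above:

    // stackOrderSketch returns log2(size/fixedStack): order 0 for a 2 KB
    // stack, 1 for 4 KB, 2 for 8 KB, and 3 for 16 KB.
    func stackOrderSketch(size uintptr) int {
        const fixedStack = 2048 // illustrative; platform-dependent in the runtime
        order := 0
        for s := uintptr(fixedStack); s < size; s <<= 1 {
            order++
        }
        return order
    }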
startingStackSize is the amount of stack that new goroutines start with.
It is a power of 2, and between _FixedStack and maxstacksize, inclusive.
startingStackSize is updated every GC by tracking the average size of
stacks scanned during the GC.
startupRandomData holds random bytes initialized at startup. These come from
the ELF AT_RANDOM auxiliary vector.
staticuint64s is used to avoid allocating in convTx for small integer values.
If you add to this list, also add it to src/internal/trace/parser.go.
If you change the values of any of the stw* constants, bump the trace
version number and make a copy of this.
TODO: These should be locals in testAtomic64, but we don't 8-byte
align stack variables on 386.
TODO: These should be locals in testAtomic64, but we don't 8-byte
align stack variables on 386.
testSigtrap and testSigusr1 are used by the runtime tests. If
non-nil, the corresponding hook is called on SIGTRAP/SIGUSR1. If it
returns true, the normal behavior on that signal is suppressed.
var testSigusr1 func(gp *g) bool
Bitmask of Ps that may have a timer, one bit per P. Reads and writes
must be atomic. Length may change at safe points.
trace is global tracing context.
var typecache [256]typeCacheBucket
runtime variable to check if the processor we're running on
actually supports the instructions used by the AES-based
hash implementation.
If useCheckmark is true, marking of an object uses the checkmark
bits instead of the standard mark bits.
var userArenaState struct{lock mutex; reuse []liveUserArenaChunk; fault []liveUserArenaChunk}
Holding worldsema grants an M the right to try to stop the world.
The compiler knows about this variable.
If you change it, you must change builtin/runtime.go, too.
If you change the first four bytes, you must also change the write
barrier insertion code.
Set in runtime.cpuinit.
TODO: deprecate these; use internal/cpu directly.
var xbuckets atomic.UnsafePointer // *bucket, mutex profile buckets
base address for all 0-byte allocations
Package-Level Constants (total 820, in which 3 are exported)
Compiler is the name of the compiler toolchain that built the
running binary. Known toolchains are:
gc Also known as cmd/compile.
gccgo The gccgo front end, part of the GCC compiler suite.
GOARCH is the running program's architecture target:
one of 386, amd64, arm, s390x, and so on.
GOOS is the running program's operating system target:
one of darwin, freebsd, linux, and so on.
To view possible combinations of GOOS and GOARCH, run "go tool dist list".
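For example, a program can report the toolchain and target it was built for using these exported constants:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Prints something like: gc linux/amd64
        fmt.Printf("%s %s/%s\n", runtime.Compiler, runtime.GOOS, runtime.GOARCH)
    }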
_64bit = 1 on 64-bit systems, 0 on 32-bit systems
const _AT_HWCAP2 = 26 // hardware capability bit vector 2
const _AT_PAGESZ = 6 // System physical page size
const _AT_RANDOM = 25 // introduced in 2.6.29
const _AT_SECURE = 23 // secure mode boolean
const _AT_SYSINFO_EHDR = 33
const _BUS_ADRALN = 1
const _BUS_ADRERR = 2
const _BUS_OBJERR = 3
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
Clone, the Linux rfork.
const _ConcurrentSweep = true
const _DT_GNU_HASH = 1879047925 // GNU-style dynamic symbol hash table
const _DT_STRTAB = 5 // Address of string table
const _DT_SYMTAB = 6 // Address of symbol table
const _DT_VERDEF = 1879048188
const _DT_VERSYM = 1879048176
const _EI_NIDENT = 16
These values are the same on all known Unix systems.
If we find a discrepancy some day, we can split them out.
const _FD_CLOEXEC = 1
const _FinBlockSize = 4096
const _FixAllocChunk = 16384 // Chunk size for FixAlloc
const _FPE_FLTDIV = 3
const _FPE_FLTINV = 7
const _FPE_FLTOVF = 4
const _FPE_FLTRES = 6
const _FPE_FLTSUB = 8
const _FPE_FLTUND = 5
const _FPE_INTDIV = 1
const _FPE_INTOVF = 2
const _FUTEX_PRIVATE_FLAG = 128
const _FUTEX_WAIT_PRIVATE = 128
const _FUTEX_WAKE_PRIVATE = 129
const _GCmarktermination = 2 // GC mark termination: allocate black, P's help GC, write barrier ENABLED
_Gcopystack means this goroutine's stack is being moved. It
is not executing user code and is not on a run queue. The
stack is owned by the goroutine that put it in _Gcopystack.
_Gdead means this goroutine is currently unused. It may be
just exited, on a free list, or just being initialized. It
is not executing user code. It may or may not have a stack
allocated. The G and its stack (if any) are owned by the M
that is exiting the G or that obtained the G from the free
list.
_Genqueue_unused is currently unused.
_Gidle means this goroutine was just allocated and has not
yet been initialized.
_Gmoribund_unused is currently unused, but hardcoded in gdb
scripts.
Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.
16 seems to provide enough amortization, but other than that it's a mostly arbitrary number.
_Gpreempted means this goroutine stopped itself for a
suspendG preemption. It is like _Gwaiting, but nothing is
yet responsible for ready()ing it. Some suspendG must CAS
the status to _Gwaiting to take responsibility for
ready()ing this G.
_Grunnable means this goroutine is on a run queue. It is
not currently executing user code. The stack is not owned.
_Grunning means this goroutine may execute user code. The
stack is owned by this goroutine. It is not on a run queue.
It is assigned an M and a P (g.m and g.m.p are valid).
_Gscan combined with one of the above states other than
_Grunning indicates that GC is scanning the stack. The
goroutine is not executing user code and the stack is owned
by the goroutine that set the _Gscan bit.
_Gscanrunning is different: it is used to briefly block
state transitions while GC signals the G to scan its own
stack. This is otherwise like _Grunning.
atomicstatus&~Gscan gives the state the goroutine will
return to when the scan completes.
defined constants
defined constants
defined constants
defined constants
defined constants
_Gsyscall means this goroutine is executing a system call.
It is not executing user code. The stack is owned by this
goroutine. It is not on a run queue. It is assigned an M.
_Gwaiting means this goroutine is blocked in the runtime.
It is not executing user code. It is not on a run queue,
but should be recorded somewhere (e.g., a channel wait
queue) so it can be ready()d when necessary. The stack is
not owned *except* that a channel operation may read or
write parts of the stack under the appropriate channel
lock. Otherwise, it is not safe to access the stack after a
goroutine enters _Gwaiting (e.g., it may get moved).
const _ITIMER_PROF = 2
const _ITIMER_REAL = 0
const _ITIMER_VIRTUAL = 1
_KindSpecialPinCounter is a special used for objects that are pinned
multiple times
_KindSpecialReachable is a special used for tracking
reachability during testing.
const _MADV_COLLAPSE = 25
const _MADV_DONTNEED = 4
const _MADV_FREE = 8
const _MADV_HUGEPAGE = 14
const _MADV_NOHUGEPAGE = 15
const _MAP_FIXED = 16
const _MAP_PRIVATE = 2
Max number of threads to run garbage collection.
2, 3, and 4 are all plausible maximums depending
on the hardware details of the machine. The garbage
collector scales well to 32 cpus.
const _MaxSmallSize = 32768
const _NumSizeClasses = 68
Number of orders that get caching. Order 0 is FixedStack
and each successive order is twice as large.
We want to cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
will be allocated directly.
Since FixedStack is different on different systems, we
must vary NumStackOrders to keep the same maximum cached size.
OS | FixedStack | NumStackOrders
-----------------+------------+---------------
linux/darwin/bsd | 2KB | 4
windows/32 | 4KB | 3
windows/64 | 8KB | 2
plan9 | 4KB | 3
const _O_CLOEXEC = 524288
const _O_NONBLOCK = 2048
const _PageShift = 13
_Pdead means a P is no longer used (GOMAXPROCS shrank). We
reuse Ps if GOMAXPROCS increases. A dead P is mostly
stripped of its resources, though a few things remain
(e.g., trace buffers).
_Pgcstop means a P is halted for STW and owned by the M
that stopped the world. The M that stopped the world
continues to use its P, even in _Pgcstop. Transitioning
from _Prunning to _Pgcstop causes an M to release its P and
park.
The P retains its run queue and startTheWorld will restart
the scheduler on Ps with non-empty run queues.
_Pidle means a P is not being used to run user code or the
scheduler. Typically, it's on the idle P list and available
to the scheduler, but it may just be transitioning between
other states.
The P is owned by the idle list or by whatever is
transitioning its state. Its run queue is empty.
const _PROT_EXEC = 4
const _PROT_NONE = 0
const _PROT_READ = 1
const _PROT_WRITE = 2
_Prunning means a P is owned by an M and is being used to
run user code or the scheduler. Only the M that owns this P
is allowed to change the P's status from _Prunning. The M
may transition the P to _Pidle (if it has no more work to
do), _Psyscall (when entering a syscall), or _Pgcstop (to
halt for the GC). The M may also hand ownership of the P
off directly to another M (e.g., to schedule a locked G).
_Psyscall means a P is not running user code. It has
affinity to an M in a syscall but is not owned by it and
may be stolen by another M. This is similar to _Pidle but
uses lightweight transitions and maintains M affinity.
Leaving _Psyscall must be done with a CAS, either to steal
or retake the P. Note that there's an ABA hazard: even if
an M successfully CASes its original P back to _Prunning
after a syscall, it must understand the P may have been
used by another M in the interim.
const _PT_DYNAMIC = 2 // Dynamic linking information
const _SA_ONSTACK = 134217728
const _SA_RESTART = 268435456
const _SA_RESTORER = 67108864
const _SA_SIGINFO = 4
const _SEGV_ACCERR = 2
const _SEGV_MAPERR = 1
const _SHN_UNDEF = 0 // Undefined section
const _SHT_DYNSYM = 11 // Dynamic linker symbol table
const _SI_KERNEL = 128
const _si_max_size = 128
const _SIG_BLOCK = 0
const _SIG_SETMASK = 2
const _SIG_UNBLOCK = 1
Values for the flags field of a sigTabT.
const _sigev_max_size = 64
Values for the flags field of a sigTabT.
Values for the flags field of a sigTabT.
Values for the flags field of a sigTabT.
Values for the flags field of a sigTabT.
Values for the flags field of a sigTabT.
Values for the flags field of a sigTabT.
const _SIGSTKFLT = 16
Values for the flags field of a sigTabT.
Values for the flags field of a sigTabT.
const _SIGVTALRM = 26
const _SOCK_DGRAM = 2
const _SS_DISABLE = 2
Per-P, per order stack segment cache size.
const _STB_GLOBAL = 1 // Global symbol
const _STT_NOTYPE = 0 // Symbol type is not specified
Tiny allocator parameters, see "Tiny allocator" comment in malloc.go.
const _TinySizeClass int8 = 2
const _VER_FLG_BASE = 1 // Version definition of file itself
const _WorkbufSize = 2048 // in bytes; larger values result in less contention
const active_spin = 4
const active_spin_cnt = 30
addrBits is the number of bits needed to represent a virtual address.
See heapAddrBits for a table of address space sizes on
various architectures. 48 bits is enough for all
architectures except s390x.
On AMD64, virtual addresses are 48-bit (or 57-bit) numbers sign extended to 64.
We shift the address left 16 to eliminate the sign extended part and make
room in the bottom for the count.
On s390x, virtual addresses are 64-bit. There's not much we
can do about this, so we just hope that the kernel doesn't
get to really high addresses and panic if it does.
On AIX, 64-bit addresses are split into 36-bit segment number and 28-bit
offset in segment. Segment numbers in the range 0x0A0000000-0x0AFFFFFFF(LSA)
are available for mmap.
We assume all tagged addresses are from memory allocated with mmap.
We use one bit to distinguish between the two ranges.
const aixTagBits = 10
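A sketch of the amd64-style packing described above (hypothetical code with illustrative names, not the runtime's tagged-pointer implementation): the 48-bit, sign-extended address is shifted left by 16 bits, the tag occupies the low 16 bits, tag bits that do not fit are discarded, and an arithmetic right shift restores the sign-extended address on unpack.

    const tagBitsSketch = 16

    // packSketch stores ptr in the high 48 bits and the low tagBitsSketch bits
    // of tag in the low bits; excess tag bits are dropped.
    func packSketch(ptr, tag uintptr) uint64 {
        return uint64(ptr)<<tagBitsSketch | uint64(tag&(1<<tagBitsSketch-1))
    }

    // unpackSketch recovers the sign-extended address and the tag.
    func unpackSketch(tp uint64) (ptr, tag uintptr) {
        return uintptr(int64(tp) >> tagBitsSketch), uintptr(tp & (1<<tagBitsSketch - 1))
    }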
arenaBaseOffset is the pointer value that corresponds to
index 0 in the heap arena map.
On amd64, the address space is 48 bits, sign extended to 64
bits. This offset lets us handle "negative" addresses (or
high addresses if viewed as unsigned).
On aix/ppc64, this offset allows keeping heapAddrBits at
48. Otherwise, it would have to be 60 in order to handle mmap addresses
(in the range 0x0a00000000000000 - 0x0afffffffffffff), but in that
case the memory reserved in (s *pageAlloc).init for chunks
causes significant slowdowns.
On other platforms, the user address space is contiguous
and starts at 0, so no offset is necessary.
A typed version of this constant that will make it into DWARF (for viewcore).
arenaBits is the total bits in a combined arena map index.
This is split between the index into the L1 arena map and
the L2 arena map.
arenaL1Bits is the number of bits of the arena number
covered by the first level arena map.
This number should be small, since the first level arena
map requires PtrSize*(1<<arenaL1Bits) of space in the
binary's BSS. It can be zero, in which case the first level
index is effectively unused. There is a performance benefit
to this, since the generated code can be more efficient,
but it comes at the cost of a large L2 mapping.
We use the L1 map on 64-bit Windows because the arena size
is small, but the address space is still 48 bits, and
there's a high cost to having a large L2.
arenaL1Shift is the number of bits to shift an arena frame
number by to compute an index into the first level arena map.
arenaL2Bits is the number of bits of the arena number
covered by the second level arena index.
The size of each arena map allocation is proportional to
1<<arenaL2Bits, so it's important that this not be too
large. 48 bits leads to 32MB arena index allocations, which
is about the practical threshold.
const asanenabled = false
const boundsConvert boundsErrorCode = 8 // (*[x]T)(s), 0 <= x <= len(s) failed
const boundsIndex boundsErrorCode = 0 // s[x], 0 <= x < len(s) failed
const boundsSlice3Acap boundsErrorCode = 5 // s[?:?:x], 0 <= x <= cap(s) failed
const boundsSlice3Alen boundsErrorCode = 4 // s[?:?:x], 0 <= x <= len(s) failed
const boundsSlice3B boundsErrorCode = 6 // s[?:x:y], 0 <= x <= y failed (but boundsSlice3A didn't happen)
const boundsSlice3C boundsErrorCode = 7 // s[x:y:?], 0 <= x <= y failed (but boundsSlice3A/B didn't happen)
const boundsSliceAcap boundsErrorCode = 2 // s[?:x], 0 <= x <= cap(s) failed
const boundsSliceAlen boundsErrorCode = 1 // s[?:x], 0 <= x <= len(s) failed
const boundsSliceB boundsErrorCode = 3 // s[x:y], 0 <= x <= y failed (but boundsSliceA didn't happen)
Maximum number of key/elem pairs a bucket can hold.
size of bucket hash table
buffer of pending write data
const canCreateFile = true
capacityPerProc is the limiter's bucket capacity for each P in GOMAXPROCS.
const cgoCheckPointerFail = "cgo argument has Go pointer to unpinned Go pointer"
const cgoResultFail = "cgo result has Go pointer"
const cgoWriteBarrierFail = "unpinned Go pointer stored into non-Go memory"
clobberdeadPtr is a special value that is used by the compiler to
clobber dead stack slots, when -clobberdead flag is set.
Clone, the Linux rfork.
const concurrentSweep = true
const cpuStatsDep statDep = 2 // corresponds to cpuStatsAggregate
data offset should be the size of the bmap struct, but needs to be
aligned correctly. For amd64p32 this means 64-bit alignment
even though pointers are 32 bit.
const debugCallRuntime = "call from within the Go runtime"
const debugCallSystemStack = "executing on Go runtime stack"
const debugCallUnknownFunc = "call from unknown function"
const debugCallUnsafePoint = "call not at safe point"
check the BP links during traceback.
debugLogBytes is the size of each per-M ring buffer. This is
allocated off-heap to avoid blowing up the M and hence the GC'd
heap size.
debugLogHeaderSize is the number of bytes in the framing
header of every dlog record.
const debugLogHex = 6
const debugLogInt = 4
const debugLogPC = 11
const debugLogPtr = 7
const debugLogString = 8
debugLogStringLimit is the maximum number of bytes in a string.
Above this, the string will be truncated with "..(n more bytes).."
debugLogSyncSize is the number of bytes in a sync record.
const debugLogTraceback = 12
const debugLogUint = 5
const debugLogUnknown = 1
debugScanConservative enables debug logging for stack
frames that are scanned conservatively.
const debugSelect = false
defaultHeapMinimum is the value of heapMinimum for GOGC==100.
const dlogEnabled = false
drainCheckThreshold specifies how many units of work to do
between self-preemption checks in gcDrain. Assuming a scan
rate of 1 MB/ms, this is ~100 µs. Lower values have higher
overhead in the scan loop (the scheduler check may perform
a syscall, so its overhead is nontrivial). Higher values
make the system less responsive to incoming work.
Possible tophash values. We reserve a few possibilities for special marks.
Each bucket (including its overflow buckets, if any) will have either all or none of its
entries in the evacuated* states (except during the evacuate() method, which only happens
during map writes and thus no one else can observe the map during that time).
const evacuatedEmpty = 4 // cell is empty, bucket is evacuated.
const evacuatedX = 2 // key/elem is valid. Entry has been evacuated to first half of larger table.
const evacuatedY = 3 // same as above, but evacuated to second half of larger table.
These errors are reported (via writeErrStr) by some OS-specific
versions of newosproc and newosproc0.
These errors are reported (via writeErrStr) by some OS-specific
versions of newosproc and newosproc0.
const fastlogNumBits = 5
const fieldKindEface = 3
const fieldKindEol = 0
const fieldKindIface = 2
const fieldKindPtr = 1
finalizer goroutine status.
finalizer goroutine status.
finalizer goroutine status.
finalizer goroutine status.
finalizer goroutine status.
const fixedRootCount = 2
const fixedStack = 2048
The minimum stack size to allocate.
The hackery here rounds fixedStack0 up to a power of 2.
const fixedStack1 = 2047
const fixedStack2 = 2047
const fixedStack3 = 2047
const fixedStack4 = 2047
const fixedStack5 = 2047
const fixedStack6 = 2047
forcePreemptNS is the time slice given to a G before it is
preempted.
Must agree with internal/buildcfg.FramePointerEnabled.
const freeChunkSum pallocSum = 2251800887427584
Values for m.freeWait.
Values for m.freeWait.
Values for m.freeWait.
freezeStopWait is a large value that freezetheworld sets
sched.stopwait to in order to request that all Gs permanently stop.
gcAssistTimeSlack is the nanoseconds of mutator assist time that
can accumulate on a P before updating gcController.assistTime.
const gcBackgroundMode gcMode = 0 // concurrent GC and sweep
gcBackgroundUtilization is the fixed CPU utilization for background
marking. It must be <= gcGoalUtilization. The difference between
gcGoalUtilization and gcBackgroundUtilization will be made up by
mark assists. The scheduler will aim to use within 50% of this
goal.
As a general rule, there's little reason to set gcBackgroundUtilization
< gcGoalUtilization. One reason might be in mostly idle applications,
where goroutines are unlikely to assist at all, so the actual
utilization will be lower than the goal. But this is a moot point
because the idle mark workers already soak up idle CPU resources.
These two values are still kept separate however because they are
distinct conceptually, and in previous iterations of the pacer the
distinction was more important.
const gcBitsChunkBytes uintptr = 65536
gcCPULimiterUpdatePeriod dictates the maximum amount of wall-clock time
we can go before updating the limiter.
gcCreditSlack is the amount of scan work credit that can
accumulate locally before updating gcController.heapScanWork and,
optionally, gcController.bgScanCredit. Lower values give a more
accurate assist ratio and make it more likely that assists will
successfully steal background credit. Higher values reduce memory
contention.
const gcForceBlockMode gcMode = 2 // stop-the-world GC now and STW sweep (forced by user)
const gcForceMode gcMode = 1 // stop-the-world GC now, concurrent sweep
gcGoalUtilization is the goal CPU utilization for
marking as a fraction of GOMAXPROCS.
Increasing the goal utilization will shorten GC cycles as the GC
has more resources behind it, lessening costs from the write barrier,
but comes at the cost of increasing mutator latency.
gcMarkWorkerDedicatedMode indicates that the P of a mark
worker is dedicated to running that mark worker. The mark
worker should run without preemption.
gcMarkWorkerFractionalMode indicates that a P is currently
running the "fractional" mark worker. The fractional worker
is necessary when GOMAXPROCS*gcBackgroundUtilization is not
an integer and using only dedicated workers would result in
utilization too far from the target of gcBackgroundUtilization.
The fractional worker should run until it is preempted and
will be scheduled to pick up the fractional part of
GOMAXPROCS*gcBackgroundUtilization.
gcMarkWorkerIdleMode indicates that a P is running the mark
worker because it has nothing else to do. The idle worker
should run until it is preempted and account its time
against gcController.idleMarkTime.
gcMarkWorkerNotWorker indicates that the next scheduled G is not
starting work and the mode should be ignored.
gcOverAssistWork determines how many extra units of scan work a GC
assist does when an assist happens. This amortizes the cost of an
assist by pre-paying for this many bytes of future allocations.
const gcStatsDep statDep = 3 // corresponds to gcStatsAggregate
gcTriggerCycle indicates that a cycle should be started if
we have not yet started cycle number gcTrigger.n (relative
to work.cycles).
gcTriggerHeap indicates that a cycle should be started when
the heap size reaches the trigger heap size computed by the
controller.
gcTriggerTime indicates that a cycle should be started when
it's been more than forcegcperiod nanoseconds since the
previous GC cycle.
gTrackingPeriod is the number of transitions out of _Grunning between
latency tracking runs.
exported value for testing
const hashRandomBytes = 128
const hashWriting = 4 // a goroutine is writing to the map
heapAddrBits is the number of bits in a heap address. On
amd64, addresses are sign-extended beyond heapAddrBits. On
other arches, they are zero-extended.
On most 64-bit platforms, we limit this to 48 bits based on a
combination of hardware and OS limitations.
amd64 hardware limits addresses to 48 bits, sign-extended
to 64 bits. Addresses where the top 16 bits are not either
all 0 or all 1 are "non-canonical" and invalid. Because of
these "negative" addresses, we offset addresses by 1<<47
(arenaBaseOffset) on amd64 before computing indexes into
the heap arenas index. In 2017, amd64 hardware added
support for 57 bit addresses; however, currently only Linux
supports this extension and the kernel will never choose an
address above 1<<47 unless mmap is called with a hint
address above 1<<47 (which we never do).
arm64 hardware (as of ARMv8) limits user addresses to 48
bits, in the range [0, 1<<48).
ppc64, mips64, and s390x support arbitrary 64 bit addresses
in hardware. On Linux, Go leans on stricter OS limits. Based
on Linux's processor.h, the user address space is limited as
follows on 64-bit architectures:
Architecture Name Maximum Value (exclusive)
---------------------------------------------------------------------
amd64 TASK_SIZE_MAX 0x007ffffffff000 (47 bit addresses)
arm64 TASK_SIZE_64 0x01000000000000 (48 bit addresses)
ppc64{,le} TASK_SIZE_USER64 0x00400000000000 (46 bit addresses)
mips64{,le} TASK_SIZE64 0x00010000000000 (40 bit addresses)
s390x TASK_SIZE 1<<64 (64 bit addresses)
These limits may increase over time, but are currently at
most 48 bits except on s390x. On all architectures, Linux
starts placing mmap'd regions at addresses that are
significantly below 48 bits, so even if it's possible to
exceed Go's 48 bit limit, it's extremely unlikely in
practice.
On 32-bit platforms, we accept the full 32-bit address
space because doing so is cheap.
mips32 only has access to the low 2GB of virtual memory, so
we further limit it to 31 bits.
On ios/arm64, although 64-bit pointers are presumably
available, pointers are truncated to 33 bits in iOS <14.
Furthermore, only the top 4 GiB of the address space are
actually available to the application. In iOS >=14, more
of the address space is available, and the OS can now
provide addresses outside of those 33 bits. Pick 40 bits
as a reasonable balance between address space usage by the
page allocator, and flexibility for what mmap'd regions
we'll accept for the heap. We can't just move to the full
48 bits because this uses too much address space for older
iOS versions.
TODO(mknyszek): Once iOS <14 is deprecated, promote ios/arm64
to a 48-bit address space like every other arm64 platform.
WebAssembly currently has a limit of 4GB linear memory.
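As a rough, hedged sketch of what the 48-bit limit implies for the arena index: with 64 MiB arenas (see the heapArenaBytes entry below), the entire 48-bit space needs only a few million arena slots. The 8-byte-per-entry assumption below is illustrative.

    package main

    import "fmt"

    func main() {
        const (
            heapAddrBits   = 48       // address bits on most 64-bit platforms, per the text above
            heapArenaBytes = 64 << 20 // 64 MiB arenas on 64-bit non-Windows, per heapArenaBytes below
        )
        arenas := (uint64(1) << heapAddrBits) / heapArenaBytes
        fmt.Printf("%d arena slots; index is ~%d MiB assuming 8-byte entries\n",
            arenas, arenas*8>>20)
    }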
heapArenaBitmapWords is the size of each heap arena's bitmap in uintptrs.
heapArenaBytes is the size of a heap arena. The heap
consists of mappings of size heapArenaBytes, aligned to
heapArenaBytes. The initial heap mapping is one arena.
This is currently 64MB on 64-bit non-Windows and 4MB on
32-bit and on Windows. We use smaller arenas on Windows
because all committed memory is charged to the process,
even if it's not touched. Hence, for processes with small
heaps, the mapped arena space needs to be commensurate.
This is particularly important with the race detector,
since it significantly amplifies the cost of committed
memory.
const heapArenaWords = 8388608
const heapStatsDep statDep = 0 // corresponds to heapStatsAggregate
const itabInitSize = 512
flags
const kindComplex128 = 16
const kindComplex64 = 15
const kindDirectIface = 32
const kindFloat32 = 13
const kindFloat64 = 14
const kindGCProg = 64
const kindInterface = 20
const kindString = 24
const kindStruct = 25
const kindUint16 = 9
const kindUint32 = 10
const kindUint64 = 11
const kindUintptr = 12
const kindUnsafePointer = 26
const largeSizeDiv = 128
const limiterEventIdle limiterEventType = 4 // Refers to time a P spent on the idle list.
const limiterEventIdleMarkWork limiterEventType = 1 // Refers to an idle mark worker (see gcMarkWorkerMode).
const limiterEventMarkAssist limiterEventType = 2 // Refers to mark assist (see gcAssistAlloc).
const limiterEventNone limiterEventType = 0 // None of the following events.
const limiterEventScavengeAssist limiterEventType = 3 // Refers to a scavenge assist (see allocSpan).
limiterEventTypeMask is a mask for the bits in p.limiterEventStart that represent
the event type. The rest of the bits of that field represent a timestamp.
Maximum average load of a bucket that triggers growth is bucketCnt*13/16 (about 80% full)
Because of minimum alignment rules, bucketCnt is known to be at least 8.
Represent as loadFactorNum/loadFactorDen, to allow integer math.
const loadFactorNum = 12
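A hedged sketch of the integer-math check this enables; loadFactorDen is not shown in this listing, so the value 2 below is an assumption chosen so that loadFactorNum/loadFactorDen matches bucketCnt*13/16 for bucketCnt = 8:

    package main

    import "fmt"

    // overloaded reports whether count elements spread over 1<<b buckets exceed
    // the documented load factor, using only integer arithmetic.
    func overloaded(count int, b uint8) bool {
        const (
            bucketCnt     = 8
            loadFactorNum = 12
            loadFactorDen = 2 // assumed; with loadFactorNum this encodes ~6 elements per bucket
        )
        return count > bucketCnt && uint64(count) > loadFactorNum*((uint64(1)<<b)/loadFactorDen)
    }

    func main() {
        fmt.Println(overloaded(12, 1)) // false: 2 buckets tolerate up to 12 elements
        fmt.Println(overloaded(13, 1)) // true: the 13th element triggers growth
    }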
The default lowest and highest continuation byte.
Constants representing the ranks of all non-leaf runtime locks, in rank order.
Locks with lower rank must be taken before locks with higher rank,
in addition to satisfying the partial order in lockPartialOrder.
A few ranks allow self-cycles, which are specified in lockPartialOrder.
MALLOC
MPROF
STACKGROW
lockRankLeafRank is the rank of lock that does not have a declared rank,
and hence is a leaf lock.
TRACE
TRACEGLOBAL
WB
logHeapArenaBytes is log_2 of heapArenaBytes. For clarity,
prefer using heapArenaBytes where possible (we need the
constant to compute some other constants).
logicalStackSentinel is a sentinel value at pcBuf[0] signifying that
pcBuf[1:] holds a logical stack requiring no further processing. Any other
value at pcBuf[0] represents a skip value to apply to the physical stack in
pcBuf[1:] after inline expansion.
const logMaxPackedValue = 21
const logPallocChunkBytes = 22
logScavChunkInUseMax is the number of bits needed to represent the number
of pages allocated in a single chunk. This is 1 more than log2 of the
number of pages in the chunk because we need to represent a fully-allocated
chunk.
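The arithmetic can be checked directly; the 8 KiB page size below is an assumption (consistent with the pagesPerArena and heapArenaBytes values elsewhere in this listing), and pallocChunkBytes is listed further down:

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        const (
            pageSize         = 8192    // assumed runtime page size
            pallocChunkBytes = 4194304 // 4 MiB, per the constant below
        )
        pagesPerChunk := pallocChunkBytes / pageSize          // 512 pages per chunk
        logScavChunkInUseMax := bits.Len(uint(pagesPerChunk)) // log2(512) + 1 = 10 bits
        fmt.Println(pagesPerChunk, logScavChunkInUseMax, (1<<logScavChunkInUseMax)-1)
    }

The resulting mask, 1023, matches the scavChunkInUseMask constant listed below.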
const mantbits32 uint = 23
const mantbits64 uint = 52
maxAlloc is the maximum size of an allocation. On 64-bit,
it's theoretically possible to allocate 1<<heapAddrBits bytes. On
32-bit, however, this is one less than 1<<32 because the
number of bytes in the address space doesn't actually fit
in a uintptr.
const maxCPUProfStack = 64
const maxElemSize = 128
Maximum key or elem size to keep inline (instead of mallocing per element).
Must fit in a uint8.
Fast versions cannot handle big elems - the cutoff size for
fast versions in cmd/compile/internal/gc/walk.go must be at most this elem.
const maxObjsPerSpan = 1024
maxObletBytes is the maximum bytes of an object to scan at
once. Larger objects will be split up into "oblets" of at
most this size. Since we can scan 1–2 MB/ms, 128 KB bounds
scan preemption at ~100 µs.
This must be > _MaxSmallSize so that the object base is the
span base.
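A minimal sketch of the splitting idea (illustrative only; the runtime performs this split inside the mark phase rather than through a helper like this):

    package main

    import "fmt"

    // obletStarts returns the start offsets of the scan units a large object of
    // the given size would be split into, each at most maxObletBytes long.
    func obletStarts(size, maxObletBytes uintptr) []uintptr {
        var starts []uintptr
        for off := uintptr(0); off < size; off += maxObletBytes {
            starts = append(starts, off)
        }
        return starts
    }

    func main() {
        const maxObletBytes = 128 << 10 // 128 KB, as documented above
        fmt.Println(obletStarts(300<<10, maxObletBytes)) // a 300 KB object becomes 3 oblets
    }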
maxPackedValue is the maximum value that any of the three fields in
the pallocSum may take on.
maxPagesPerPhysPage is the maximum number of supported runtime pages per
physical page, based on maxPhysPageSize.
maxPhysHugePageSize sets an upper-bound on the maximum huge page size
that the runtime supports.
maxPhysPageSize is the maximum page size the runtime supports.
Numbers fundamental to the encoding.
const maxSmallSize = 32768
max depth of stack to record in bucket
maxStackScanSlack is the bytes of stack space allocated or freed
that can accumulate on a P before updating gcController.stackSize.
const maxTinySize = 16
The maximum trigger constant is chosen somewhat arbitrarily, but the
current constant has served us well over the years.
maxWhen is the maximum value for timer's when field.
const maxZero = 1024 // must match value in reflect/value.go:maxZero cmd/compile/internal/gc/walk.go:zeroValSize
memoryLimitHeapGoalHeadroomPercent is how much headroom the memory-limit-based
heap goal should have as a percent of the maximum possible heap goal allowed
to maintain the memory limit.
memoryLimitMinHeapGoalHeadroom is the minimum amount of headroom the
pacer gives to the heap goal when operating in the memory-limited regime.
That is, it'll reduce the heap goal by this many extra bytes off of the
base calculation, at minimum.
profile types
These values must be kept identical to their corresponding Kind* values
in the runtime/metrics package.
minHeapForMetadataHugePages sets a threshold on when certain kinds of
heap metadata, currently the arenas map L2 entries and page alloc bitmap
mappings, are allowed to be backed by huge pages. If the heap goal ever
exceeds this threshold, then huge pages are enabled.
These numbers are chosen with the assumption that huge pages are on the
order of a few MiB in size.
The kind of metadata this applies to has a very low overhead when compared
to address space used, but their constant overheads for small heaps would
be very high if they were to be backed by huge pages (e.g. a few MiB makes
a huge difference for an 8 MiB heap, but barely any difference for a 1 GiB
heap). The benefit of huge pages is also not worth it for small heaps,
because only a very, very small part of the metadata is used for small heaps.
N.B. If the heap goal exceeds the threshold then shrinks to a very small size
again, then huge pages will still be enabled for this mapping. The reason is that
there's no point unless we're also returning the physical memory for these
metadata mappings back to the OS. That would be quite complex to do in general
as the heap is likely fragmented after a reduction in heap size.
minLegalPointer is the smallest possible legal pointer.
This is the smallest possible architectural page size,
since we assume that the first page is never mapped.
This should agree with minZeroPage in the compiler.
minPhysPageSize is a lower-bound on the physical page size. The
true physical page size may be larger than this. In contrast,
sys.PhysPageSize is an upper-bound on the physical page size.
Spend at least 1 ms scavenging, otherwise the corresponding
sleep time to maintain our desired utilization is too low to
be reliable.
minTagBits is the minimum number of tag bits that we expect.
const minTopHash = 5 // minimum tophash for a normal filled cell.
The minimum trigger constant was chosen empirically: given a sufficiently
fast/scalable allocator with 48 Ps that could drive the trigger ratio
to <0.05, this constant causes applications to retain the same peak
RSS compared to not having this allocator.
const mProfCycleWrap uint32 = 100663296
const msanenabled = false
const mSpanDead mSpanState = 0
const mSpanInUse mSpanState = 1 // allocated for garbage collected heap
const mSpanManual mSpanState = 2 // allocated for manual management (e.g., stack allocator)
const mutex_locked = 1
const mutex_sleeping = 2
const mutex_unlocked = 0
sentinel bucket ID for iterator checks
const numSpanClasses = 136
const numStatsDeps statDep = 4
const numSweepClasses = 272
Offsets into internal/cpu records for use in assembly.
const oldIterator = 2 // there may be an iterator using oldbuckets
osRelaxMinNS is the number of nanoseconds of idleness to tolerate
without performing an osRelax. Since osRelax may reduce the
precision of timers, this should be enough larger than the relaxed
timer precision to keep the timer error acceptable.
Constants for testing.
const pageAlloc64Bit = 1
const pageCachePages uintptr = 64
const pagesPerArena = 8192
pagesPerReclaimerChunk indicates how many pages to scan from the
pageInUse bitmap at a time. Used by the page reclaimer.
Higher values reduce contention on scanning indexes (such as
h.reclaimIndex), but increase the minimum latency of the
operation.
The time required to scan this many pages can vary a lot depending
on how many spans are actually freed. Experimentally, it can
scan for pages at ~300 GB/ms on a 2.6GHz Core i7, but can only
free spans at ~32 MB/ms. Using 512 pages bounds this at
roughly 100µs.
Must be a multiple of the pageInUse bitmap element size and
must also evenly divide pagesPerArena.
pagesPerSpanRoot indicates how many pages to scan from a span root
at a time. Used by special root marking.
Higher values improve throughput by increasing locality, but
increase the minimum latency of a marking operation.
Must be a multiple of the pageInUse bitmap element size and
must also evenly divide pagesPerArena.
const pallocChunkBytes = 4194304
The size of a bitmap chunk, i.e. the amount of bits (that is, pages) to consider
in the bitmap at once.
Number of bits needed to represent all indices into the L1 of the
chunks map.
See (*pageAlloc).chunks for more details. Update the documentation
there should this number change.
const pallocChunksL1Shift = 13
pallocChunksL2Bits is the number of bits of the chunk index number
covered by the second level of the chunks map.
See (*pageAlloc).chunks for more details. Update the documentation
there should this change.
const passive_spin = 1
const pcbucketsize = 4096 // size of bucket in the pc->func lookup table
pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer
goroutines respectively. The semaphore can be in the following states:
pdReady - io readiness notification is pending;
a goroutine consumes the notification by changing the state to pdNil.
pdWait - a goroutine prepares to park on the semaphore, but not yet parked;
the goroutine commits to park by changing the state to G pointer,
or, alternatively, concurrent io notification changes the state to pdReady,
or, alternatively, concurrent timeout/close changes the state to pdNil.
G pointer - the goroutine is blocked on the semaphore;
io notification or timeout/close changes the state to pdReady or pdNil respectively
and unparks the goroutine.
pdNil - none of the above.
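A hedged model of one of these semaphores and the pdReady -> pdNil consumption step (the real runtime stores a g pointer in the same word and parks/unparks goroutines; this sketch only illustrates the named state transition):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // Simplified states for one of pollDesc's semaphores (rg or wg).
    const (
        pdNil uintptr = iota
        pdReady
        pdWait
    )

    // consumeReady models a goroutine consuming a pending io readiness
    // notification by moving the semaphore from pdReady to pdNil.
    func consumeReady(sem *atomic.Uintptr) bool {
        return sem.CompareAndSwap(pdReady, pdNil)
    }

    func main() {
        var rg atomic.Uintptr
        rg.Store(pdReady)              // io readiness notification arrives
        fmt.Println(consumeReady(&rg)) // true: the notification is consumed
        fmt.Println(consumeReady(&rg)) // false: the semaphore is already pdNil
    }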
persistentChunkSize is the number of bytes we allocate when we grow
a persistentAlloc.
physPageAlignedStacks indicates whether stack allocations must be
physical page aligned. This is a requirement for MAP_STACK on
OpenBSD.
const pinnerSize = 64
const pollBlockSize = 4096
const pollClosing = 1
Error codes returned by runtime_pollReset and runtime_pollWait.
These must match the values in internal/poll/fd_poll_runtime.go.
const pollEventErr = 2
const pollFDSeqBits = 20 // number of bits in pollFDSeq
const pollFDSeqMask = 1048575 // mask for pollFDSeq
const preemptMSupported = true
profBufTagCount is the size of the CPU profile buffer's storage for the
goroutine tags associated with each sample. A capacity of 1<<14 means
room for 16k samples, or 160 thread-seconds at a 100 Hz sample rate.
profBufWordCount is the size of the CPU profile buffer's storage for the
header and stack of each sample, measured in 64-bit words. Every sample
has a required header of two words. With a small additional header (a
word or two) and stacks at the profiler's maximum length of 64 frames,
that capacity can support 1900 samples or 19 thread-seconds at a 100 Hz
sample rate, at a cost of 1 MiB.
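The stated capacity can be reproduced with back-of-the-envelope arithmetic (the two-word "small additional header" below is an assumption taken from the "a word or two" wording above):

    package main

    import "fmt"

    func main() {
        const (
            bufBytes       = 1 << 20    // 1 MiB buffer
            wordBytes      = 8          // storage is measured in 64-bit words
            wordsPerSample = 2 + 2 + 64 // required header + assumed extra header + max 64 frames
        )
        samples := bufBytes / wordBytes / wordsPerSample
        fmt.Println(samples, "samples, about", samples/100, "thread-seconds at 100 Hz")
    }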
const profReaderSleeping profIndex = 4294967296 // reader is sleeping and must be woken up
const profWriteExtra profIndex = 8589934592 // overflow or eof waiting
const raceenabled = false
To shake out latent assumptions about scheduling order,
we introduce some randomness into scheduling decisions
when running with the race detector.
The need for this was made obvious by changing the
(deterministic) scheduling order in Go 1.5 and breaking
many poorly-written tests.
With the randomness here, as long as the tests pass
consistently with -race, they shouldn't have latent scheduling
assumptions.
reduceExtraPercent represents the amount of memory under the limit
that the scavenger should target. For example, 5 means we target 95%
of the limit.
The purpose of shooting lower than the limit is to ensure that, once
close to the limit, the scavenger is working hard to maintain it. If
we have a memory limit set but are far away from it, there's no harm
in leaving up to 100-retainExtraPercent live, and it's more efficient
anyway, for the same reasons that retainExtraPercent exists.
retainExtraPercent represents the amount of memory over the heap goal
that the scavenger should keep as a buffer space for the allocator.
This constant is used when we do not have a memory limit set.
The purpose of maintaining this overhead is to have a greater pool of
unscavenged memory available for allocation (since using scavenged memory
incurs an additional cost), to account for heap fragmentation and
the ever-changing layout of the heap.
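A hedged sketch of how these two headroom percentages could translate into a retained-memory target; the percentage values and the formula below are illustrative, not the runtime's actual pacing code:

    package main

    import "fmt"

    // scavengeTarget sketches a retained-memory target built from the two
    // headroom constants described above.
    func scavengeTarget(heapGoal, memLimit uint64) uint64 {
        const (
            retainExtraPercent = 10 // assumed: keep some slack above the heap goal
            reduceExtraPercent = 5  // assumed: aim below the memory limit once one is set
        )
        if memLimit > 0 {
            return memLimit / 100 * (100 - reduceExtraPercent)
        }
        return heapGoal / 100 * (100 + retainExtraPercent)
    }

    func main() {
        fmt.Println(scavengeTarget(1<<30, 0))     // no limit: goal plus retained slack
        fmt.Println(scavengeTarget(1<<30, 2<<30)) // limit set: 95% of the limit
    }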
riscv64 SV57 mode gives 56 bits of userspace VA.
The tagged pointer code supports it,
but broader support for SV57 mode is incomplete,
and there may be other issues (see #54104).
const riscv64TagBits = 11
rootBlockBytes is the number of bytes to scan per data or
BSS root.
Numbers fundamental to the encoding.
const rwmutexMaxReaders = 1073741824
const sameSizeGrow = 8 // the current map growth is to a new map of the same size
const scavChunkFlagsMask = 63
scavChunkHasFree indicates whether the chunk has anything left to
scavenge. This is the opposite of "empty," used elsewhere in this
file. The reason we say "HasFree" here is so the zero value is
correct for a newly-grown chunk. (New memory is scavenged.)
scavChunkHiOcFrac indicates the fraction of pages that need to be allocated
in the chunk in a single GC cycle for it to be considered high density.
const scavChunkHiOccPages uint16 = 496
const scavChunkInUseMask = 1023
scavChunkMaxFlags is the maximum number of flags we can have, given how
a scavChunkData is packed into 8 bytes.
scavChunkNoHugePage indicates whether this chunk has had any huge
pages broken by the scavenger.
The negative here is unfortunate, but necessary to make it so that
the zero value of scavChunkData accurately represents the state of
a newly-grown chunk. (New memory is marked as backed by huge pages.)
scavengeCostRatio is the approximate ratio between the costs of using previously
scavenged memory and scavenging memory.
For most systems the cost of scavenging greatly outweighs the costs
associated with using scavenged memory, making this constant 0. On other systems
(especially ones where "sysUsed" is not just a no-op) this cost is non-trivial.
This ratio is used as part of a multiplicative factor to help the scavenger account
for the additional costs of using scavenged memory in its pacing.
The background scavenger is paced according to these parameters.
scavengePercent represents the portion of mutator time we're willing
to spend on scavenging in percent.
const selectDefault selectDir = 3 // default
const selectRecv selectDir = 2 // case <-Chan:
const selectSend selectDir = 1 // case Chan <- Send
Prime to not correlate with any user patterns.
sigPerThreadSyscall is the same signal (SIGSETXID) used by glibc for
per-thread syscalls on Linux. We use it for the same purpose in non-cgo
binaries.
sigPreempt is the signal used for non-cooperative preemption.
There's no good way to choose this signal, but there are some
heuristics:
1. It should be a signal that's passed-through by debuggers by
default. On Linux, this is SIGALRM, SIGURG, SIGCHLD, SIGIO,
SIGVTALRM, SIGPROF, and SIGWINCH, plus some glibc-internal signals.
2. It shouldn't be used internally by libc in mixed Go/C binaries
because libc may assume it's the only thing that can handle these
signals. For example SIGCANCEL or SIGSETXID.
3. It should be a signal that can happen spuriously without
consequences. For example, SIGALRM is a bad choice because the
signal handler can't tell if it was caused by the real process
alarm or not (arguably this means the signal is broken, but I
digress). SIGUSR1 and SIGUSR2 are also bad because those are often
used in meaningful ways by applications.
4. We need to deal with platforms without real-time signals (like
macOS), so those are out.
We use SIGURG because it meets all of these criteria, is extremely
unlikely to be used by an application for its "real" meaning (both
because out-of-band data is basically unused and because SIGURG
doesn't report which socket has the condition, making it pretty
useless), and even if it is, the application has to be ready for
spurious SIGURG. SIGIO wouldn't be a bad choice either, but is more
likely to be used for real.
const sigReceiving = 1
const sigSending = 2
const smallSizeDiv = 8
const smallSizeMax = 1024
const spanAllocHeap spanAllocType = 0 // heap span
const spanAllocPtrScalarBits spanAllocType = 2 // unrolled GC prog bitmap span
const spanAllocStack spanAllocType = 1 // stack span
const spanAllocWorkBuf spanAllocType = 3 // work buf span
const spanSetBlockEntries = 512 // 4KB on 64-bit
const spanSetInitSpineCap = 256 // Enough for 1GB heap on 64-bit
stackDebug == 0: no logging
== 1: logging of per-stack operations
== 2: logging of per-frame operations
== 3: logging of per-word updates
== 4: logging of per-word reads
const stackFaultOnFree = 0 // old stacks are mapped noaccess to detect use after free
Force a stack movement. Used for debugging.
0xfffffeed in hex.
Thread is forking. Causes a split stack check failure.
0xfffffb2e in hex.
const stackFromSystem = 0 // allocate stacks from system memory instead of the heap
The stack guard is a pointer this many bytes above the
bottom of the stack.
The guard leaves enough room for a stackNosplit chain of NOSPLIT calls
plus one stackSmall frame plus stackSystem bytes for the OS.
This arithmetic must match that in cmd/internal/objabi/stack.go:StackLimit.
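Per the description above, the guard offset is simply the sum of those three quantities. The values below are assumed for illustration; the real constants are platform- and build-dependent:

    package main

    import "fmt"

    func main() {
        const (
            stackNosplit = 800 // assumed: room for a chain of NOSPLIT calls
            stackSmall   = 128 // assumed: one small frame
            stackSystem  = 0   // assumed: no extra OS reservation on this platform
        )
        stackGuard := stackNosplit + stackSmall + stackSystem
        fmt.Println("stack guard:", stackGuard, "bytes above the stack bottom")
    }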
The minimum size of stack used by Go code
const stackNoCache = 0 // disable per-P small stack caches
stackNosplit is the maximum number of bytes that a chain of NOSPLIT
functions can use.
This arithmetic must match that in cmd/internal/objabi/stack.go:StackNosplit.
stackPoisonMin is the lowest allowed stack poison value.
Goroutine preemption request.
0xfffffade in hex.
stackSystem is a number of additional bytes to add
to each stack below the usual guard area for OS-specific
purposes like signal handling. Used on Windows, Plan 9,
and iOS because they do not use a separate stack.
const stackTraceDebug = false
It doesn't really matter what value we start at, but we can't be zero, because
that'll cause divide-by-zero issues. Pick something conservative which we'll
also use as a fallback.
const staticLockRanking = false
Reasons to stop-the-world.
Avoid reusing reasons and add new ones instead.
const summaryL0Bits = 14
The number of radix bits for each level.
The value of 3 is chosen such that the block of summaries we need to scan at
each level fits in 64 bytes (2^3 summaries * 8 bytes per summary), which is
close to the L1 cache line width on many systems. Also, a value of 3 fits 4 tree
levels perfectly into the 21-bit pallocBits summary field at the root level.
The following equation explains how each of the constants relate:
summaryL0Bits + (summaryLevels-1)*summaryLevelBits + logPallocChunkBytes = heapAddrBits
summaryLevels is an architecture-dependent value defined in mpagealloc_*.go.
The number of levels in the radix tree.
Code points in the surrogate range are not valid for UTF-8.
const sweepClassDone sweepClass = 4294967295 const sweepDrainedMask = 2147483648
sweepMinHeapDistance is a lower bound on the heap distance
(in bytes) reserved for concurrent sweeping between GC
cycles.
const sysStatsDep statDep = 1 // corresponds to sysStatsAggregate
const tagAllocSample = 17
In addition to the 16 bits taken from the top, we can take 3 from the
bottom, because node must be pointer-aligned, giving a total of 19 bits
of count.
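A hedged sketch of that packing: 16 tag bits borrowed from the unused top of a 48-bit address plus 3 from the alignment bits at the bottom. The layout below is illustrative; the runtime's real tagged-pointer helpers differ in detail across platforms:

    package main

    import "fmt"

    const (
        addrBits = 48     // assumed canonical address width (see heapAddrBits above)
        tagBits  = 16 + 3 // 16 unused top bits + 3 alignment bits = 19 bits of tag
    )

    func pack(p uintptr, tag uint64) uint64 {
        return uint64(p)<<(64-addrBits) | tag&((1<<tagBits)-1)
    }

    func unpackPtr(t uint64) uintptr { return uintptr(t >> tagBits << 3) }
    func unpackTag(t uint64) uint64  { return t & ((1 << tagBits) - 1) }

    func main() {
        p := uintptr(0xc000001230) // an 8-byte-aligned address with the top 16 bits clear (64-bit only)
        t := pack(p, 12345)
        fmt.Printf("%#x %d\n", unpackPtr(t), unpackTag(t)) // the pointer and tag round-trip
    }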
const tagFinalizer = 7
The number of bits stored in the numeric tag of a taggedPointer
const tagGoroutine = 4
const tagMemProf = 16
const tagMemStats = 10
const tagOSThread = 9
const tagOtherRoot = 2
const tagQueuedFinalizer = 11
const tagStackFrame = 5
testSmallBuf forces a small write barrier buffer to stress write
barrier flushing.
throwTypeNone means that we are not throwing.
throwTypeRuntime is a throw due to a problem with Go itself.
These throws include as much information as possible to aid in
debugging the runtime, including runtime frames, system goroutines,
and frame metadata.
throwTypeUser is a throw due to a problem with the application.
These throws do not include runtime frames, system goroutines, or
frame metadata.
const timeHistMaxBucketBits = 48 // Note that this is exclusive; 1 higher than the actual range.
For the time histogram type, we use an HDR histogram.
Values are placed in buckets based solely on the most
significant set bit. Thus, buckets are power-of-2 sized.
Values are then placed into sub-buckets based on the value of
the next timeHistSubBucketBits most significant bits. Thus,
sub-buckets are linear within a bucket.
Therefore, the number of sub-buckets (timeHistNumSubBuckets)
defines the error. This error may be computed as
1/timeHistNumSubBuckets*100%. For example, for 16 sub-buckets
per bucket the error is approximately 6%.
The number of buckets (timeHistNumBuckets), on the
other hand, defines the range. To avoid producing a large number
of buckets that are close together, especially for small numbers
(e.g. 1, 2, 3, 4, 5 ns) that aren't very useful, timeHistNumBuckets
is defined in terms of the least significant bit (timeHistMinBucketBits)
that needs to be set before we start bucketing and the most
significant bit (timeHistMaxBucketBits) that we bucket before we just
dump it into a catch-all bucket.
As an example, consider the configuration:
timeHistMinBucketBits = 9
timeHistMaxBucketBits = 48
timeHistSubBucketBits = 2
Then:
011000001
^--
│ ^
│ └---- Next 2 bits -> sub-bucket 3
└------- Bit 9 unset -> bucket 0
110000001
^--
│ ^
│ └---- Next 2 bits -> sub-bucket 2
└------- Bit 9 set -> bucket 1
1000000010
^-- ^
│ ^ └-- Lower bits ignored
│ └---- Next 2 bits -> sub-bucket 0
└------- Bit 10 set -> bucket 2
Following this pattern, bucket 38 will have the bit 46 set. We don't
have any buckets for higher values, so we spill the rest into an overflow
bucket containing values of 2^47-1 nanoseconds or approx. 1 day or more.
This range is more than enough to handle durations produced by the runtime.
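The example configuration above can be turned into a small, hedged bucketing sketch (the runtime's real implementation additionally handles negative durations and the underflow/overflow buckets):

    package main

    import (
        "fmt"
        "math/bits"
    )

    // bucketOf sketches the bucketing rule described above, using the example
    // configuration (minimum bucket bit 9, 4 sub-buckets per bucket).
    func bucketOf(v uint64) (bucket, subBucket uint) {
        const (
            timeHistMinBucketBits = 9
            timeHistSubBucketBits = 2
            timeHistNumSubBuckets = 1 << timeHistSubBucketBits
        )
        bucketBit := uint(bits.Len64(v))
        if bucketBit < timeHistMinBucketBits {
            bucketBit = timeHistMinBucketBits
        } else {
            bucket = bucketBit - timeHistMinBucketBits + 1
        }
        subBucket = uint(v>>(bucketBit-1-timeHistSubBucketBits)) % timeHistNumSubBuckets
        return bucket, subBucket
    }

    func main() {
        fmt.Println(bucketOf(0b011000001))  // 0 3, matching the first example above
        fmt.Println(bucketOf(0b110000001))  // 1 2
        fmt.Println(bucketOf(0b1000000010)) // 2 0
    }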
const timeHistNumBuckets = 40
Two extra buckets, one for underflow, one for overflow.
The timer is deleted and should be removed.
It should not be run, but it is still in some P's heap.
The timer has been modified to an earlier time.
The new when value is in the nextwhen field.
The timer is in some P's heap, possibly in the wrong place.
The timer has been modified to the same or a later time.
The new when value is in the nextwhen field.
The timer is in some P's heap, possibly in the wrong place.
The timer is being modified.
The timer will only have this status briefly.
The timer has been modified and is being moved.
The timer will only have this status briefly.
Timer has no status set yet.
The timer has been stopped.
It is not in any P's heap.
The timer is being removed.
The timer will only have this status briefly.
Running the timer function.
A timer will only have this status briefly.
Waiting for timer to fire.
The timer is in some P's heap.
const tinySizeClass int8 = 2
tlsSlots is the number of pointer-sized slots reserved for TLS on some platforms,
like Windows.
The constant is known to the compiler.
There is no fundamental theory behind this number.
Shift of the number of arguments in the first event byte.
Keep a cached value to make gotraceback fast,
since we call it on every call to gentraceback.
The cached value is a uint32 in which the low bits
are the "crash" and "all" settings and the remaining
bits are the traceback value (0 off, 1 on, 2 include system).
tracebackInnerFrames is the number of innermost frames to print in a
stack trace. The total maximum frames is tracebackInnerFrames +
tracebackOuterFrames.
tracebackOuterFrames is the number of outermost frames to print in a
stack trace.
For maximal efficiency, just map the trace block reason directly to a trace
event.
Maximum number of bytes to encode uint64 in base-128.
Event types in the trace, args are given in square brackets.
Identifier of a fake P that is used when we trace without a real P.
Maximum number of PCs in a single stack trace.
Since events contain only stack id rather than whole stack trace,
we can allow quite large values here.
Timestamps in trace are cputicks/traceTickDiv.
This makes absolute values of timestamp diffs smaller,
and so they are encoded in fewer bytes.
64 on x86 is somewhat arbitrary (one tick is ~20ns on a 3GHz machine).
The suggested increment frequency for PowerPC's time base register is
512 MHz according to Power ISA v2.07 section 6.2, so we use 16 on ppc64
and ppc64le.
These constants determine the bounds on the GC trigger as a fraction
of heap bytes allocated between the start of a GC (heapLive == heapMarked)
and the end of a GC (heapLive == heapGoal).
The constants are obscured in this way for efficiency. The denominator
of the fraction is always a power-of-two for a quick division, so that
the numerator is a single constant integer multiplication.
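A hedged sketch of the shape of that computation; the 7/16 and 15/16 fractions below are placeholders, not the runtime's actual bounds:

    package main

    import "fmt"

    // triggerBounds sketches how numerator/power-of-two-denominator pairs turn
    // into lower and upper bounds on the GC trigger between heapMarked and heapGoal.
    func triggerBounds(heapMarked, heapGoal uint64) (lo, hi uint64) {
        const (
            minTriggerFractionNum = 7  // assumed illustrative numerator
            maxTriggerFractionNum = 15 // assumed illustrative numerator
            triggerFractionDen    = 16 // power of two: the division compiles to a shift
        )
        runway := heapGoal - heapMarked
        lo = heapMarked + runway*minTriggerFractionNum/triggerFractionDen
        hi = heapMarked + runway*maxTriggerFractionNum/triggerFractionDen
        return lo, hi
    }

    func main() {
        fmt.Println(triggerBounds(64<<20, 128<<20)) // bounds fall between heapMarked and heapGoal
    }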
Cache of types that have been serialized already.
We use a type's hash field to pick a bucket.
Inside a bucket, we keep a list of types that
have been serialized so far, most recently used first.
Note: when a bucket overflows we may end up
serializing a type more than once. That's ok.
const uintptrMask = 18446744073709551615
unwindJumpStack indicates that, if the traceback is on a system stack, it
should resume tracing at the user stack when the system stack is
exhausted.
unwindPrintErrors indicates that if unwinding encounters an error, it
should print a message and stop without throwing. This is used for things
like stack printing, where it's better to get incomplete information than
to crash. This is also used in situations where everything may not be
stopped nicely and the stack walk may not be able to complete, such as
during profiling signals or during a crash.
If neither unwindPrintErrors nor unwindSilentErrors is set, unwinding
performs extra consistency checks and throws on any error.
Note that there are a small number of fatal situations that will throw
regardless of unwindPrintErrors or unwindSilentErrors.
unwindSilentErrors silently ignores errors during unwinding.
unwindTrap indicates that the initial PC and SP are from a trap, not a
return PC from a call.
The unwindTrap flag is updated during unwinding. If set, frame.pc is the
address of a faulting instruction instead of the return address of a
call. It also means the liveness at pc may not be known.
TODO: Distinguish frame.continpc, which is really the stack map PC, from
the actual continuation PC, which is computed differently depending on
this flag and a few other things.
const userArenaChunkBytes uintptr = 8388608 // min(userArenaChunkBytesMax, heapArenaBytes)
userArenaChunkBytes is the size of a user arena chunk.
userArenaChunkMaxAllocBytes is the maximum size of an object that can
be allocated from an arena. This number is chosen to cap worst-case
fragmentation of user arenas to 25%. Larger allocations are redirected
to the heap.
userArenaChunkPages is the number of pages a user arena chunk uses.
vdsoArrayMax is the byte-size of a maximally sized array on this architecture.
See cmd/compile/internal/amd64/galign.go arch.MAXWIDTH initialization.
vdsoBloomSizeScale is a scaling factor for gnuhash tables which are uint32 indexed,
but contain uintptrs
const vdsoDynSize uintptr = 70368744177663
const vdsoHashSize = 281474976710655 // uint32
const vdsoSymStringsSize = 1125899906842623 // byte
Maximum indices for the array types used when traversing the vDSO ELF structures.
Computed from architecture-specific max provided by vdso_linux_*.go
const vdsoVerSymSize = 562949953421311 // uint16
verifyTimers can be set to true to add debugging checks that the
timer heaps are valid.
const waitReasonChanReceive waitReason = 14 // "chan receive"
const waitReasonChanReceiveNilChan waitReason = 3 // "chan receive (nil chan)"
const waitReasonChanSend waitReason = 15 // "chan send"
const waitReasonChanSendNilChan waitReason = 4 // "chan send (nil chan)"
const waitReasonDebugCall waitReason = 29 // "debug call"
const waitReasonDumpingHeap waitReason = 5 // "dumping heap"
const waitReasonFinalizerWait waitReason = 16 // "finalizer wait"
const waitReasonForceGCIdle waitReason = 17 // "force gc (idle)"
const waitReasonGarbageCollection waitReason = 6 // "garbage collection"
const waitReasonGarbageCollectionScan waitReason = 7 // "garbage collection scan"
const waitReasonGCAssistMarking waitReason = 1 // "GC assist marking"
const waitReasonGCAssistWait waitReason = 11 // "GC assist wait"
const waitReasonGCMarkTermination waitReason = 30 // "GC mark termination"
const waitReasonGCScavengeWait waitReason = 13 // "GC scavenge wait"
const waitReasonGCSweepWait waitReason = 12 // "GC sweep wait"
const waitReasonGCWorkerActive waitReason = 27 // "GC worker (active)"
const waitReasonGCWorkerIdle waitReason = 26 // "GC worker (idle)"
const waitReasonIOWait waitReason = 2 // "IO wait"
const waitReasonPanicWait waitReason = 8 // "panicwait"
const waitReasonPreempted waitReason = 28 // "preempted"
const waitReasonSelect waitReason = 9 // "select"
const waitReasonSelectNoCases waitReason = 10 // "select (no cases)"
const waitReasonSemacquire waitReason = 18 // "semacquire"
const waitReasonSleep waitReason = 19 // "sleep"
const waitReasonStoppingTheWorld waitReason = 31 // "stopping the world"
const waitReasonSyncCondWait waitReason = 20 // "sync.Cond.Wait"
const waitReasonSyncMutexLock waitReason = 21 // "sync.Mutex.Lock"
const waitReasonSyncRWMutexLock waitReason = 23 // "sync.RWMutex.Lock"
const waitReasonSyncRWMutexRLock waitReason = 22 // "sync.RWMutex.RLock"
const waitReasonTraceReaderBlocked waitReason = 24 // "trace reader (blocked)"
const waitReasonWaitForGCCycle waitReason = 25 // "wait for GC cycle"
const waitReasonZero waitReason = 0 // ""
wbBufEntries is the maximum number of pointers that can be
stored in the write barrier buffer.
This trades latency for throughput amortization. Higher
values amortize flushing overhead more, but increase the
latency of flushing. Higher values also increase the cache
footprint of the buffer.
TODO: What is the latency cost of this? Tune this value.
Maximum number of entries that we need to ask from the
buffer in a single call.
workbufAlloc is the number of bytes to allocate at a time
for new workbufs. This must be a multiple of pageSize and
should be a multiple of _WorkbufSize.
Larger values reduce workbuf allocation overhead. Smaller
values reduce heap fragmentation.
The pages are generated with Golds v0.6.7. (GOOS=linux GOARCH=amd64) Golds is a Go 101 project developed by Tapir Liu.