Package sync provides basic synchronization primitives such as mutual
exclusion locks. Other than the Once and WaitGroup types, most are intended
for use by low-level library routines. Higher-level synchronization is
better done via channels and communication.

Values containing the types defined in this package should not be copied.

Involved Source Files
	cond.go
	map.go
	once.go
	oncefunc.go
	pool.go
	poolqueue.go
	runtime.go
	runtime2.go
	rwmutex.go
	waitgroup.go
Code Examples
package main

import (
	"fmt"
	"sync"
)

func main() {
	var once sync.Once
	onceBody := func() {
		fmt.Println("Only once")
	}
	done := make(chan bool)
	for i := 0; i < 10; i++ {
		go func() {
			once.Do(onceBody)
			done <- true
		}()
	}
	for i := 0; i < 10; i++ {
		<-done
	}
}
package main

import (
	"bytes"
	"io"
	"os"
	"sync"
	"time"
)

var bufPool = sync.Pool{
	New: func() any {
		// The Pool's New function should generally only return pointer
		// types, since a pointer can be put into the return interface
		// value without an allocation:
		return new(bytes.Buffer)
	},
}

// timeNow is a fake version of time.Now for tests.
func timeNow() time.Time {
	return time.Unix(1136214245, 0)
}

func Log(w io.Writer, key, val string) {
	b := bufPool.Get().(*bytes.Buffer)
	b.Reset()
	// Replace this with time.Now() in a real logger.
	b.WriteString(timeNow().UTC().Format(time.RFC3339))
	b.WriteByte(' ')
	b.WriteString(key)
	b.WriteByte('=')
	b.WriteString(val)
	w.Write(b.Bytes())
	bufPool.Put(b)
}

func main() {
	Log(os.Stdout, "path", "/search?q=flowers")
}
package main

import (
	"sync"
)

type httpPkg struct{}

func (httpPkg) Get(url string) {}

var http httpPkg

func main() {
	var wg sync.WaitGroup
	var urls = []string{
		"http://www.golang.org/",
		"http://www.google.com/",
		"http://www.example.com/",
	}
	for _, url := range urls {
		// Increment the WaitGroup counter.
		wg.Add(1)
		// Launch a goroutine to fetch the URL.
		go func(url string) {
			// Decrement the counter when the goroutine completes.
			defer wg.Done()
			// Fetch the URL.
			http.Get(url)
		}(url)
	}
	// Wait for all HTTP fetches to complete.
	wg.Wait()
}
Package-Level Type Names (total 21, in which 8 are exported)
Cond implements a condition variable, a rendezvous point
for goroutines waiting for or announcing the occurrence
of an event.
Each Cond has an associated Locker L (often a *Mutex or *RWMutex),
which must be held when changing the condition and
when calling the Wait method.
A Cond must not be copied after first use.
In the terminology of the Go memory model, Cond arranges that
a call to Broadcast or Signal “synchronizes before” any Wait call
that it unblocks.
For many simple use cases, users will be better off using channels than a
Cond (Broadcast corresponds to closing a channel, and Signal corresponds to
sending on a channel).
For more on replacements for sync.Cond, see [Roberto Clapis's series on
advanced concurrency patterns], as well as [Bryan Mills's talk on concurrency
patterns].
[Roberto Clapis's series on advanced concurrency patterns]: https://blogtitle.github.io/categories/concurrency/
[Bryan Mills's talk on concurrency patterns]: https://drive.google.com/file/d/1nPdvhB0PutEJzdCq5ms6UI58dp50fcAN/view

	L Locker // L is held while observing or changing the condition
	checker copyChecker
	noCopy noCopy
	notify notifyList

Broadcast wakes all goroutines waiting on c.

It is allowed but not required for the caller to hold c.L
during the call.

Signal wakes one goroutine waiting on c, if there is any.

It is allowed but not required for the caller to hold c.L
during the call.

Signal() does not affect goroutine scheduling priority; if other goroutines
are attempting to lock c.L, they may be awoken before a "waiting" goroutine.

Wait atomically unlocks c.L and suspends execution
of the calling goroutine. After later resuming execution,
Wait locks c.L before returning. Unlike in other systems,
Wait cannot return unless awoken by Broadcast or Signal.

Because c.L is not locked while Wait is waiting, the caller
typically cannot assume that the condition is true when
Wait returns. Instead, the caller should Wait in a loop:

	c.L.Lock()
	for !condition() {
		c.Wait()
	}
	... make use of condition ...
	c.L.Unlock()
func NewCond(l Locker) *Cond
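To make the Wait-loop pattern above concrete, here is a minimal runnable
sketch; the ready flag and goroutine structure are illustrative, not part
of the package:

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		c := sync.NewCond(&sync.Mutex{})
		ready := false

		go func() {
			c.L.Lock()
			ready = true // change the condition while holding c.L
			c.L.Unlock()
			c.Signal() // wake one waiter; holding c.L here is optional
		}()

		c.L.Lock()
		for !ready { // always re-check the condition in a loop
			c.Wait() // atomically unlocks c.L and suspends; relocks before returning
		}
		fmt.Println("condition observed")
		c.L.Unlock()
	}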
Map is like a Go map[interface{}]interface{} but is safe for concurrent use
by multiple goroutines without additional locking or coordination.
Loads, stores, and deletes run in amortized constant time.
The Map type is specialized. Most code should use a plain Go map instead,
with separate locking or coordination, for better type safety and to make it
easier to maintain other invariants along with the map content.
The Map type is optimized for two common use cases: (1) when the entry for a given
key is only ever written once but read many times, as in caches that only grow,
or (2) when multiple goroutines read, write, and overwrite entries for disjoint
sets of keys. In these two cases, use of a Map may significantly reduce lock
contention compared to a Go map paired with a separate Mutex or RWMutex.
The zero Map is empty and ready for use. A Map must not be copied after first use.
In the terminology of the Go memory model, Map arranges that a write operation
“synchronizes before” any read operation that observes the effect of the write, where
read and write operations are defined as follows.
Load, LoadAndDelete, LoadOrStore, Swap, CompareAndSwap, and CompareAndDelete
are read operations; Delete, LoadAndDelete, Store, and Swap are write operations;
LoadOrStore is a write operation when it returns loaded set to false;
CompareAndSwap is a write operation when it returns swapped set to true;
and CompareAndDelete is a write operation when it returns deleted set to true.

dirty contains the portion of the map's contents that require mu to be
held. To ensure that the dirty map can be promoted to the read map quickly,
it also includes all of the non-expunged entries in the read map.

Expunged entries are not stored in the dirty map. An expunged entry in the
clean map must be unexpunged and added to the dirty map before a new value
can be stored to it.

If the dirty map is nil, the next write to the map will initialize it by
making a shallow copy of the clean map, omitting stale entries.

misses counts the number of loads since the read map was last updated that
needed to lock mu to determine whether the key was present.

Once enough misses have occurred to cover the cost of copying the dirty
map, the dirty map will be promoted to the read map (in the unamended
state) and the next store to the map will make a new dirty copy.

	mu Mutex

read contains the portion of the map's contents that are safe for
concurrent access (with or without mu held).

The read field itself is always safe to load, but must only be stored with
mu held.

Entries stored in read may be updated concurrently without mu, but updating
a previously-expunged entry requires that the entry be copied to the dirty
map and unexpunged with mu held.

CompareAndDelete deletes the entry for key if its value is equal to old.
The old value must be of a comparable type.

If there is no current value for key in the map, CompareAndDelete
returns false (even if the old value is the nil interface value).

CompareAndSwap swaps the old and new values for key
if the value stored in the map is equal to old.
The old value must be of a comparable type.

Delete deletes the value for a key.

Load returns the value stored in the map for a key, or nil if no
value is present.
The ok result indicates whether value was found in the map.

LoadAndDelete deletes the value for a key, returning the previous value if any.
The loaded result reports whether the key was present.

LoadOrStore returns the existing value for the key if present.
Otherwise, it stores and returns the given value.
The loaded result is true if the value was loaded, false if stored.

Range calls f sequentially for each key and value present in the map.
If f returns false, range stops the iteration.

Range does not necessarily correspond to any consistent snapshot of the Map's
contents: no key will be visited more than once, but if the value for any key
is stored or deleted concurrently (including by f), Range may reflect any
mapping for that key from any point during the Range call. Range does not
block other methods on the receiver; even f itself may call any method on m.

Range may be O(N) with the number of elements in the map even if f returns
false after a constant number of calls.

Store sets the value for a key.

Swap swaps the value for a key and returns the previous value if any.
The loaded result reports whether the key was present.

(*Map) dirtyLocked()
(*Map) loadReadOnly() readOnly
(*Map) missLocked()
func mime.clearSyncMap(m *Map)
var encoding/binary.structSize
var encoding/json.encoderCache
var encoding/json.fieldCache
var internal/godebug.cache
var mime.extensions
var mime.mimeTypes
var mime.mimeTypesLower
var reflect.layoutCache
var reflect.lookupCache
var reflect.ptrMap
var runtime/cgo.handles
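To make the Map API above concrete, a minimal usage sketch; the keys and
values are illustrative:

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		var m sync.Map // the zero Map is empty and ready for use

		// Store and Load are safe for concurrent use without extra locking.
		m.Store("alpha", 1)
		m.Store("beta", 2)

		if v, ok := m.Load("alpha"); ok {
			fmt.Println("alpha =", v)
		}

		// LoadOrStore returns the existing value if the key is present.
		actual, loaded := m.LoadOrStore("alpha", 100)
		fmt.Println(actual, loaded) // 1 true

		// Range iterates over all entries; return false to stop early.
		m.Range(func(key, value any) bool {
			fmt.Println(key, value)
			return true
		})
	}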
A Mutex is a mutual exclusion lock.
The zero value for a Mutex is an unlocked mutex.
A Mutex must not be copied after first use.
In the terminology of the Go memory model,
the n'th call to Unlock “synchronizes before” the m'th call to Lock
for any n < m.
A successful call to TryLock is equivalent to a call to Lock.
A failed call to TryLock does not establish any “synchronizes before”
relation at all.

	sema uint32
	state int32

Lock locks m.
If the lock is already in use, the calling goroutine
blocks until the mutex is available.

TryLock tries to lock m and reports whether it succeeded.

Note that while correct uses of TryLock do exist, they are rare,
and use of TryLock is often a sign of a deeper problem
in a particular use of mutexes.

Unlock unlocks m.
It is a run-time error if m is not locked on entry to Unlock.

A locked Mutex is not associated with a particular goroutine.
It is allowed for one goroutine to lock a Mutex and then
arrange for another goroutine to unlock it.

(*Mutex) lockSlow()
(*Mutex) unlockSlow(new int32)
*Mutex : Locker
var allPoolsMu
var crypto/tls.writerMutex
var image.formatsMu
var internal/godebug.updateMu
var internal/intern.mu
var mime.extensionsMu
var net/http.http2testHookOnPanicMu *sync.Mutex
var net/http.uniqNameMu
var reflect.funcTypesMutex
var syscall.forkingLock
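To make the Mutex API above concrete, a minimal sketch guarding a shared
counter; the counter and goroutine count are illustrative:

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		var mu sync.Mutex // the zero value is an unlocked mutex
		counter := 0

		var wg sync.WaitGroup
		for i := 0; i < 100; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				mu.Lock() // blocks until the mutex is available
				counter++
				mu.Unlock()
			}()
		}
		wg.Wait()
		fmt.Println(counter) // always 100

		// TryLock never blocks; correct uses are rare (see above).
		if mu.TryLock() {
			fmt.Println("acquired without blocking")
			mu.Unlock()
		}
	}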
Once is an object that will perform exactly one action.
A Once must not be copied after first use.
In the terminology of the Go memory model,
the return from f “synchronizes before”
the return from any call of once.Do(f).

done indicates whether the action has been performed.
It is first in the struct because it is used in the hot path.
The hot path is inlined at every call site.
Placing done first allows more compact instructions on some architectures
(amd64/386), and fewer instructions (to calculate offset) on other
architectures.

	m Mutex

Do calls the function f if and only if Do is being called for the
first time for this instance of Once. In other words, given

	var once Once

if once.Do(f) is called multiple times, only the first call will invoke f,
even if f has a different value in each invocation. A new instance of
Once is required for each function to execute.

Do is intended for initialization that must be run exactly once. Since f
is niladic, it may be necessary to use a function literal to capture the
arguments to a function to be invoked by Do:

	config.once.Do(func() { config.init(filename) })

Because no call to Do returns until the one call to f returns, if f causes
Do to be called, it will deadlock.

If f panics, Do considers it to have returned; future calls of Do return
without calling f.

(*Once) doSlow(f func())
var compress/flate.fixedOnce
var crypto/des.feistelBoxOnce
var crypto/ecdsa.p224Once
var crypto/ecdsa.p256Once
var crypto/ecdsa.p384Once
var crypto/ecdsa.p521Once
var crypto/elliptic.initonce
var crypto/internal/nistec._p224BOnce
var crypto/internal/nistec._p384BOnce
var crypto/internal/nistec._p521BOnce
var crypto/internal/nistec.p224GeneratorTableOnce
var crypto/internal/nistec.p224GGOnce
var crypto/internal/nistec.p384GeneratorTableOnce
var crypto/internal/nistec.p521GeneratorTableOnce
var crypto/internal/randutil.closedChanOnce
var crypto/x509.once
var github.com/gotd/td/telegram.typesOnce
var github.com/klauspost/compress/flate.fixedOnce
var go.opentelemetry.io/otel/attribute.defaultEncoderOnce
var hash/crc32.castagnoliOnce
var hash/crc32.ieeeOnce
var internal/poll.kernelVersion53Once
var internal/poll.serverInit
var mime.once
var net.confOnce
var net.mptcpOnce
var net.onceReadProtocols
var net.onceReadServices
var net.threadOnce
var net/http.envProxyOnce
var net/http.http2commonBuildOnce
var net/textproto.commonHeaderOnce
var syscall.envOnce
var time.localOnce
var time.unnamedFixedZonesOnce
var time.zoneinfoOnce
var vendor/golang.org/x/net/http2/hpack.buildRootOnce
var vendor/golang.org/x/text/unicode/norm.recompMapOnce
A Pool is a set of temporary objects that may be individually saved and
retrieved.
Any item stored in the Pool may be removed automatically at any time without
notification. If the Pool holds the only reference when this happens, the
item might be deallocated.
A Pool is safe for use by multiple goroutines simultaneously.
Pool's purpose is to cache allocated but unused items for later reuse,
relieving pressure on the garbage collector. That is, it makes it easy to
build efficient, thread-safe free lists. However, it is not suitable for all
free lists.
An appropriate use of a Pool is to manage a group of temporary items
silently shared among and potentially reused by concurrent independent
clients of a package. Pool provides a way to amortize allocation overhead
across many clients.
An example of good use of a Pool is in the fmt package, which maintains a
dynamically-sized store of temporary output buffers. The store scales under
load (when many goroutines are actively printing) and shrinks when
quiescent.
On the other hand, a free list maintained as part of a short-lived object is
not a suitable use for a Pool, since the overhead does not amortize well in
that scenario. It is more efficient to have such objects implement their own
free list.
A Pool must not be copied after first use.
In the terminology of the Go memory model, a call to Put(x) “synchronizes before”
a call to Get returning that same value x.
Similarly, a call to New returning x “synchronizes before”
a call to Get returning that same value x.

New optionally specifies a function to generate
a value when Get would otherwise return nil.
It may not be changed concurrently with calls to Get.

	noCopy noCopy
	local unsafe.Pointer // local fixed-size per-P pool, actual type is [P]poolLocal
	localSize uintptr // size of the local array
	victim unsafe.Pointer // local from previous cycle
	victimSize uintptr // size of victims array

Get selects an arbitrary item from the Pool, removes it from the
Pool, and returns it to the caller.
Get may choose to ignore the pool and treat it as empty.
Callers should not assume any relation between values passed to Put and
the values returned by Get.

If Get would otherwise return nil and p.New is non-nil, Get returns
the result of calling p.New.

Put adds x to the pool.

(*Pool) getSlow(pid int) any

pin pins the current goroutine to P, disables preemption and
returns poolLocal pool for the P and the P's id.
Caller must call runtime_procUnpin() when done with the pool.

(*Pool) pinSlow() (*poolLocal, int)
func net/http.bufioWriterPool(size int) *Pool
func nhooyr.io/websocket.slidingWindowPool(n int) *Pool
func reflect.funcLayout(t *reflect.funcType, rcvr *abi.Type) (frametype *abi.Type, framePool *Pool, abid reflect.abiDesc)
var crypto/tls.outBufPool
var encoding/json.encodeStatePool
var encoding/json.scannerPool
var fmt.ppFree
var fmt.ssFree
var github.com/go-faster/jx.decPool *Pool
var github.com/go-faster/jx.encPool *Pool
var github.com/go-faster/jx.writerPool *Pool
var github.com/gotd/td/internal/crypto.sha256Pool *Pool
var github.com/gotd/td/internal/proto.gzipBufPool
var github.com/klauspost/compress/flate.bitWriterPool
var go.opentelemetry.io/otel/attribute.sortables
var go.uber.org/multierr._bufferPool
var internal/poll.splicePipePool
var io.blackHolePool
var log.bufferPool
var math/big.natPool
var net/http.bufioReaderPool
var net/http.bufioWriter2kPool
var net/http.bufioWriter4kPool
var net/http.copyBufPool
var net/http.headerSorterPool
var net/http.http2bufPool
var net/http.http2bufWriterPool
var net/http.http2errChanPool
var net/http.http2fhBytes
var net/http.http2littleBuf
var net/http.http2responseWriterStatePool
var net/http.http2sorterPool
var net/http.http2writeDataPool
var net/http.textprotoReaderPool
var nhooyr.io/websocket.bufioReaderPool
var nhooyr.io/websocket.bufioWriterPool
var nhooyr.io/websocket.flateReaderPool
var nhooyr.io/websocket.flateWriterPool
var os.dirBufPool
var regexp.bitStatePool
var regexp.onePassPool
var syscall.pageBufPool *Pool
var vendor/golang.org/x/net/http2/hpack.bufPool
A RWMutex is a reader/writer mutual exclusion lock.
The lock can be held by an arbitrary number of readers or a single writer.
The zero value for a RWMutex is an unlocked mutex.
A RWMutex must not be copied after first use.
If a goroutine holds a RWMutex for reading and another goroutine might
call Lock, no goroutine should expect to be able to acquire a read lock
until the initial read lock is released. In particular, this prohibits
recursive read locking. This is to ensure that the lock eventually becomes
available; a blocked Lock call excludes new readers from acquiring the
lock.
In the terminology of the Go memory model,
the n'th call to Unlock “synchronizes before” the m'th call to Lock
for any n < m, just as for Mutex.
For any call to RLock, there exists an n such that
the n'th call to Unlock “synchronizes before” that call to RLock,
and the corresponding call to RUnlock “synchronizes before”
the n+1'th call to Lock.

	readerCount atomic.Int32 // number of pending readers
	readerSem uint32 // semaphore for readers to wait for completing writers
	readerWait atomic.Int32 // number of departing readers
	w Mutex // held if there are pending writers
	writerSem uint32 // semaphore for writers to wait for completing readers

Lock locks rw for writing.
If the lock is already locked for reading or writing,
Lock blocks until the lock is available.

RLock locks rw for reading.

It should not be used for recursive read locking; a blocked Lock
call excludes new readers from acquiring the lock. See the
documentation on the RWMutex type.

RLocker returns a Locker interface that implements
the Lock and Unlock methods by calling rw.RLock and rw.RUnlock.

RUnlock undoes a single RLock call;
it does not affect other simultaneous readers.
It is a run-time error if rw is not locked for reading
on entry to RUnlock.

TryLock tries to lock rw for writing and reports whether it succeeded.

Note that while correct uses of TryLock do exist, they are rare,
and use of TryLock is often a sign of a deeper problem
in a particular use of mutexes.

TryRLock tries to lock rw for reading and reports whether it succeeded.

Note that while correct uses of TryRLock do exist, they are rare,
and use of TryRLock is often a sign of a deeper problem
in a particular use of mutexes.

Unlock unlocks rw for writing. It is a run-time error if rw is
not locked for writing on entry to Unlock.

As with Mutexes, a locked RWMutex is not associated with a particular
goroutine. One goroutine may RLock (Lock) a RWMutex and then
arrange for another goroutine to RUnlock (Unlock) it.

(*RWMutex) rUnlockSlow(r int32)
*RWMutex : Locker
func syscall_hasWaitingReaders(rw *RWMutex) bool
func syscall.hasWaitingReaders(rw *RWMutex) bool
var syscall.ForkLock
var crypto/x509.systemRootsMu
var go.uber.org/zap._encoderMutex
var go.uber.org/zap._globalMu
var nhooyr.io/websocket.swPoolMu
var syscall.envLock
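To make the RWMutex API above concrete, a minimal sketch of a read-mostly
structure; the cache type and its methods are illustrative:

	package main

	import (
		"fmt"
		"sync"
	)

	// cache is an illustrative read-mostly structure guarded by an RWMutex.
	type cache struct {
		mu sync.RWMutex
		m  map[string]string
	}

	func (c *cache) get(k string) (string, bool) {
		c.mu.RLock() // many readers may hold the lock at once
		defer c.mu.RUnlock()
		v, ok := c.m[k]
		return v, ok
	}

	func (c *cache) set(k, v string) {
		c.mu.Lock() // a writer gets exclusive access
		defer c.mu.Unlock()
		c.m[k] = v
	}

	func main() {
		c := &cache{m: make(map[string]string)}
		c.set("greeting", "hello")
		if v, ok := c.get("greeting"); ok {
			fmt.Println(v)
		}
	}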
A WaitGroup waits for a collection of goroutines to finish.
The main goroutine calls Add to set the number of
goroutines to wait for. Then each of the goroutines
runs and calls Done when finished. At the same time,
Wait can be used to block until all goroutines have finished.
A WaitGroup must not be copied after first use.
In the terminology of the Go memory model, a call to Done
“synchronizes before” the return of any Wait call that it unblocks.

	noCopy noCopy
	sema uint32
	state atomic.Uint64 // high 32 bits are counter, low 32 bits are waiter count.

Add adds delta, which may be negative, to the WaitGroup counter.
If the counter becomes zero, all goroutines blocked on Wait are released.
If the counter goes negative, Add panics.

Note that calls with a positive delta that occur when the counter is zero
must happen before a Wait. Calls with a negative delta, or calls with a
positive delta that start when the counter is greater than zero, may happen
at any time.
Typically this means the calls to Add should execute before the statement
creating the goroutine or other event to be waited for.
If a WaitGroup is reused to wait for several independent sets of events,
new Add calls must happen after all previous Wait calls have returned.
See the WaitGroup example.

Done decrements the WaitGroup counter by one.

Wait blocks until the WaitGroup counter is zero.
var net.dnsWaitGroup
copyChecker holds back pointer to itself to detect object copying.

(*copyChecker) check()
dequeueNil is used in poolDequeue to represent interface{}(nil).
Since we use nil to represent empty slots, we need a sentinel value
to represent nil.
An entry is a slot in the map corresponding to a particular key.

p points to the interface{} value stored for the entry.

If p == nil, the entry has been deleted, and either m.dirty == nil or
m.dirty[key] is e.

If p == expunged, the entry has been deleted, m.dirty != nil, and the entry
is missing from m.dirty.

Otherwise, the entry is valid and recorded in m.read.m[key] and, if m.dirty
!= nil, in m.dirty[key].

An entry can be deleted by atomic replacement with nil: when m.dirty is
next created, it will atomically replace nil with expunged and leave
m.dirty[key] unset.

An entry's associated value can be updated by atomic replacement, provided
p != expunged. If p == expunged, an entry's associated value can be updated
only after first setting m.dirty[key] = e so that lookups using the dirty
map find the entry.

(*entry) delete() (value any, ok bool)
(*entry) load() (value any, ok bool)

swapLocked unconditionally swaps a value into the entry.
The entry must be known not to be expunged.

tryCompareAndSwap compares the entry with the given old value and swaps
it with a new value if the entry is equal to the old value, and the entry
has not been expunged.

If the entry is expunged, tryCompareAndSwap returns false and leaves
the entry unchanged.

(*entry) tryExpungeLocked() (isExpunged bool)

tryLoadOrStore atomically loads or stores a value if the entry is not
expunged.

If the entry is expunged, tryLoadOrStore leaves the entry unchanged and
returns with ok==false.

trySwap swaps a value if the entry has not been expunged.

If the entry is expunged, trySwap returns false and leaves the entry
unchanged.

unexpungeLocked ensures that the entry is not marked as expunged.

If the entry was previously expunged, it must be added to the dirty map
before m.mu is unlocked.
func newEntry(i any) *entry
noCopy may be added to structs which must not be copied
after the first use.
See https://golang.org/issues/8005#issuecomment-190753527
for details.
Note that it must not be embedded, due to the Lock and Unlock methods.

Lock is a no-op used by -copylocks checker from `go vet`.

(*noCopy) Unlock()

*noCopy : Locker
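Since noCopy is unexported, code outside sync must define its own
equivalent. A minimal sketch of the pattern; the Container type is
illustrative:

	package example

	// noCopy may be included as a field (not embedded) in any struct that
	// must not be copied after the first use.
	type noCopy struct{}

	// Lock and Unlock are no-ops recognized by the -copylocks checker
	// from `go vet`.
	func (*noCopy) Lock()   {}
	func (*noCopy) Unlock() {}

	// Container is an illustrative type; copying a Container by value
	// is now flagged by `go vet -copylocks`.
	type Container struct {
		noCopy noCopy
		data   map[string]int
	}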
poolChain is a dynamically-sized version of poolDequeue.
This is implemented as a doubly-linked list queue of poolDequeues
where each dequeue is double the size of the previous one. Once a
dequeue fills up, this allocates a new one and only ever pushes to
the latest dequeue. Pops happen from the other end of the list and
once a dequeue is exhausted, it gets removed from the list.

head is the poolDequeue to push to. This is only accessed
by the producer, so doesn't need to be synchronized.

tail is the poolDequeue to popTail from. This is accessed
by consumers, so reads and writes must be atomic.

(*poolChain) popHead() (any, bool)
(*poolChain) popTail() (any, bool)
(*poolChain) pushHead(val any)
poolChainElt

	poolDequeue poolDequeue

next and prev link to the adjacent poolChainElts in this
poolChain.

next is written atomically by the producer and read
atomically by the consumer. It only transitions from nil to
non-nil.

prev is written atomically by the consumer and read
atomically by the producer. It only transitions from
non-nil to nil.

	next *poolChainElt
	prev *poolChainElt

The following methods are promoted from the embedded poolDequeue; its
headTail and vals fields are documented under poolDequeue below:

(*poolChainElt) pack(head, tail uint32) uint64
(*poolChainElt) popHead() (any, bool)
(*poolChainElt) popTail() (any, bool)
(*poolChainElt) pushHead(val any)
(*poolChainElt) unpack(ptrs uint64) (head, tail uint32)

func loadPoolChainElt(pp **poolChainElt) *poolChainElt
func storePoolChainElt(pp **poolChainElt, v *poolChainElt)
poolDequeue is a lock-free fixed-size single-producer,
multi-consumer queue. The single producer can both push and pop
from the head, and consumers can pop from the tail.
It has the added feature that it nils out unused slots to avoid
unnecessary retention of objects. This is important for sync.Pool,
but not typically a property considered in the literature.

headTail packs together a 32-bit head index and a 32-bit
tail index. Both are indexes into vals modulo len(vals)-1.

	tail = index of oldest data in queue
	head = index of next slot to fill

Slots in the range [tail, head) are owned by consumers.
A consumer continues to own a slot outside this range until
it nils the slot, at which point ownership passes to the
producer.

The head index is stored in the most-significant bits so
that we can atomically add to it and the overflow is
harmless.

vals is a ring buffer of interface{} values stored in this
dequeue. The size of this must be a power of 2.

vals[i].typ is nil if the slot is empty and non-nil
otherwise. A slot is still in use until *both* the tail
index has moved beyond it and typ has been set to nil. This
is set to nil atomically by the consumer and read
atomically by the producer.

(*poolDequeue) pack(head, tail uint32) uint64

popHead removes and returns the element at the head of the queue.
It returns false if the queue is empty. It must only be called by a
single producer.

popTail removes and returns the element at the tail of the queue.
It returns false if the queue is empty. It may be called by any
number of consumers.

pushHead adds val at the head of the queue. It returns false if the
queue is full. It must only be called by a single producer.

(*poolDequeue) unpack(ptrs uint64) (head, tail uint32)
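For concreteness, a sketch of the pack/unpack scheme described above,
assuming the 32-bit head/tail split (dequeueBits = 32) that the
implementation uses:

	package example

	const dequeueBits = 32

	// pack stores a 32-bit head and a 32-bit tail in one uint64 so both
	// indexes can be read and updated with a single atomic operation.
	// The head occupies the most-significant bits.
	func pack(head, tail uint32) uint64 {
		const mask = 1<<dequeueBits - 1
		return (uint64(head) << dequeueBits) | uint64(tail&mask)
	}

	// unpack recovers the head and tail indexes from a packed headTail word.
	func unpack(ptrs uint64) (head, tail uint32) {
		const mask = 1<<dequeueBits - 1
		head = uint32((ptrs >> dequeueBits) & mask)
		tail = uint32(ptrs & mask)
		return
	}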
poolLocal

Prevents false sharing on widespread platforms with
128 mod (cache line size) = 0.

	pad [128 - unsafe.Sizeof(poolLocalInternal{})%128]byte
	poolLocalInternal poolLocalInternal
	private any // Can be used only by the respective P.
	shared poolChain // Local P can pushHead/popHead; any P can popTail.
func indexLocal(l unsafe.Pointer, i int) *poolLocal
func (*Pool).pin() (*poolLocal, int)
func (*Pool).pinSlow() (*poolLocal, int)
Local per-P Pool appendix.

	private any // Can be used only by the respective P.
	shared poolChain // Local P can pushHead/popHead; any P can popTail.
readOnly is an immutable struct stored atomically in the Map.read field.

	amended bool // true if the dirty map contains some key not in m.
	m map[any]*entry
func (*Map).loadReadOnly() readOnly
rlocker

	readerCount atomic.Int32 // number of pending readers
	readerSem uint32 // semaphore for readers to wait for completing writers
	readerWait atomic.Int32 // number of departing readers
	w Mutex // held if there are pending writers
	writerSem uint32 // semaphore for writers to wait for completing readers

(*rlocker) Lock()
(*rlocker) Unlock()
*rlocker : Locker
Package-Level Functions (total 34, in which 4 are exported)
NewCond returns a new Cond with Locker l.
OnceFunc returns a function that invokes f only once. The returned function
may be called concurrently.

If f panics, the returned function will panic with the same value on every call.

OnceValue returns a function that invokes f only once and returns the value
returned by f. The returned function may be called concurrently.

If f panics, the returned function will panic with the same value on every call.

Type Parameters:
	T: any

OnceValues returns a function that invokes f only once and returns the values
returned by f. The returned function may be called concurrently.

If f panics, the returned function will panic with the same value on every call.

Type Parameters:
	T1: any
	T2: any
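To make OnceValue concrete, a minimal sketch; the computation is
illustrative:

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		// expensive runs at most once; later calls return the cached value.
		expensive := sync.OnceValue(func() int {
			fmt.Println("computing...")
			return 42
		})
		fmt.Println(expensive()) // prints "computing..." then 42
		fmt.Println(expensive()) // prints 42 only; f is not invoked again
	}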
poolRaceAddr returns an address to use as the synchronization point
for race detector logic. We don't use the actual pointer stored in x
directly, for fear of conflicting with other synchronization on that address.
Instead, we hash the pointer to get an index into poolRaceHash.
See discussion on golang.org/cl/31589.
Active spinning runtime support.
runtime_canSpin reports whether spinning makes sense at the moment.
Semacquire waits until *s > 0 and then atomically decrements it.
It is intended as a simple sleep primitive for use by the synchronization
library and should not be used directly.
Semacquire(RW)Mutex(R) is like Semacquire, but for profiling contended
Mutexes and RWMutexes.
If lifo is true, queue waiter at the head of wait queue.
skipframes is the number of frames to omit during tracing, counting from
runtime_SemacquireMutex's caller.
The different forms of this function just tell the runtime how to present
the reason for waiting in a backtrace, and are used to compute some metrics.
Otherwise they're functionally identical.
Semrelease atomically increments *s and notifies a waiting goroutine
if one is blocked in Semacquire.
It is intended as a simple wakeup primitive for use by the synchronization
library and should not be used directly.
If handoff is true, pass count directly to the first waiter.
skipframes is the number of frames to omit during tracing, counting from
runtime_Semrelease's caller.
syscall_hasWaitingReaders reports whether any goroutine is waiting
to acquire a read lock on rw. This exists because syscall.ForkLock
is an RWMutex, and we can't change that without breaking compatibility.
We don't need or want RWMutex semantics for ForkLock, and we use
this private API to avoid having to change the type of ForkLock.
For more details see the syscall package.
Provided by runtime via linkname.
Package-Level Variables (total 5, none are exported)
allPools is the set of pools that have non-empty primary
caches. Protected by either 1) allPoolsMu and pinning or 2)
STW.
dequeueLimit is the maximum size of a poolDequeue.
This must be at most (1<<dequeueBits)/2 because detecting fullness
depends on wrapping around the ring buffer without wrapping around
the index. We divide by 4 so this fits in an int on 32-bit.
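Spelled out as code, with the dequeueBits = 32 split used by poolDequeue,
the bound works out as follows (a sketch of the constant, matching the
reasoning above):

	package example

	const dequeueBits = 32

	// (1 << dequeueBits) / 2 = 2^31 is the theoretical cap on detectable
	// fullness; dividing by 4 instead yields 2^30, which still fits in a
	// 32-bit int.
	const dequeueLimit = (1 << dequeueBits) / 4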
Mutex fairness.
Mutex can be in two modes of operation: normal and starvation.

In normal mode, waiters are queued in FIFO order, but a woken-up waiter
does not own the mutex and competes with newly arriving goroutines for
ownership. Newly arriving goroutines have an advantage: they are
already running on a CPU and there can be many of them, so a woken-up
waiter has a good chance of losing. In that case it is queued at the front
of the wait queue. If a waiter fails to acquire the mutex for more than
1 ms, it switches the mutex to starvation mode.

In starvation mode, ownership of the mutex is directly handed off from
the unlocking goroutine to the waiter at the front of the queue.
Newly arriving goroutines don't try to acquire the mutex even if it appears
to be unlocked, and don't try to spin. Instead they queue themselves at
the tail of the wait queue.

If a waiter receives ownership of the mutex and sees that either
(1) it is the last waiter in the queue, or (2) it waited for less than 1 ms,
it switches the mutex back to normal operation mode.

Normal mode has considerably better performance, as a goroutine can acquire
a mutex several times in a row even if there are blocked waiters.
Starvation mode is important to prevent pathological cases of tail latency.