A ReadError reports an error encountered while reading input.
Deprecated: No longer returned.
Resetter resets a ReadCloser returned by NewReader or NewReaderDict
to switch to a new underlying Reader. This permits reusing a ReadCloser
instead of allocating a new one.
Reset discards any buffered data and resets the Resetter as if it was
newly initialized with the given reader.
compress/flate.Resetter(interface)
*decompressor
*compress/flate.decompressor
Resetter : compress/flate.Resetter
A WriteError reports an error encountered while writing output.
Deprecated: No longer returned.
A Writer takes data written to it and writes the compressed
form of that data to an underlying writer (see NewWriter).
	d    compressor
	dict []byte
Close flushes and closes the writer.
Flush flushes any pending data to the underlying writer.
It is useful mainly in compressed network protocols, to ensure that
a remote reader has enough data to reconstruct a packet.
Flush does not return until the data has been written.
Calling Flush when there is no pending data still causes the Writer
to emit a sync marker of at least 4 bytes.
If the underlying writer returns an error, Flush returns that error.
In the terminology of the zlib library, Flush is equivalent to Z_SYNC_FLUSH.
Reset discards the writer's state and makes it equivalent to
the result of NewWriter or NewWriterDict called with dst
and w's level and dictionary.
ResetDict discards the writer's state and makes it equivalent to
the result of NewWriter or NewWriterDict called with dst
and w's level, but sets a specific dictionary.
Write writes data to w, which will eventually write the
compressed form of data to its underlying writer.
*Writer : internal/bisect.Writer
*Writer : io.Closer
*Writer : io.WriteCloser
*Writer : io.Writer
*Writer : crypto/tls.transcriptHash
func NewWriter(w io.Writer, level int) (*Writer, error)
func NewWriterDict(w io.Writer, level int, dict []byte) (*Writer, error)
func NewWriterWindow(w io.Writer, windowSize int) (*Writer, error)
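A minimal usage sketch (names illustrative, error handling abbreviated) showing NewWriter together with Flush and Close:

	package main

	import (
		"bytes"
		"compress/flate"
		"fmt"
		"log"
	)

	func main() {
		var buf bytes.Buffer
		w, err := flate.NewWriter(&buf, flate.BestSpeed)
		if err != nil {
			log.Fatal(err) // only fails for a level outside [-2, 9]
		}
		// Write a packet, then Flush: everything written so far, plus
		// a sync marker of at least 4 bytes, is now in buf, so a remote
		// reader could reconstruct the packet before Close is called.
		if _, err := w.Write([]byte("packet 1")); err != nil {
			log.Fatal(err)
		}
		if err := w.Flush(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("flushed:", buf.Len(), "bytes")
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}
	}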
advancedState contains state for the advanced levels, with bigger hash tables, etc.
	chainHead int
	// Input hash chains:
	// hashHead[hashValue] contains the largest inputIndex with the specified hash value.
	// If hashHead[hashValue] is within the current window, then
	// hashPrev[hashHead[hashValue] & windowMask] contains the previous index
	// with the same hash value.
	hashMatch  [262]uint32
	hashOffset int
	hashPrev   [32768]uint32
	ii         uint16 // position of last match, intended to overflow to reset.
	index      int    // input window: unprocessed data is window[index:windowEnd]
	// deflate state
	length         int
	maxInsertIndex int
	offset         int
compressor:
	blockStart    int  // window index where current tokens start
	byteAvailable bool // if true, still need to process window[index-1].
	compressionLevel compressionLevel
		compressionLevel.chain           int
		compressionLevel.fastSkipHashing int
		compressionLevel.good            int
		compressionLevel.lazy            int
		compressionLevel.level           int
		compressionLevel.nice            int
	err    error
	fast   fastEnc // compression algorithm
	fill   func(*compressor, []byte) int // copy data to window
	h      *huffmanEncoder
	state  *advancedState
	step   func(*compressor) // process window
	sync   bool              // requesting flush
	tokens tokens            // queued output tokens
	w         *huffmanBitWriter
	window    []byte
	windowEnd int
(*compressor) close() error
deflateLazy is the same as deflate, but with d.fastSkipHashing == skipNever,
meaning it always has lazy matching on.
fillBlock will fill the buffer with data for huffman-only compression.
The number of bytes copied is returned.
(*compressor) fillDeflate(b []byte) int
fillWindow will fill the current window with the supplied
dictionary and calculate all hashes.
This is much faster than doing a full encode.
Should only be used after a start/reset.
findMatch tries to find a match starting at index whose length is greater than prevSize.
We only look at chainCount possibilities before giving up.
pos = s.index, prevHead = s.chainHead-s.hashOffset, prevLength = minMatchLength-1, lookahead
(*compressor) init(w io.Writer, level int) (err error)
(*compressor) initDeflate()
reset the state of the compressor.
(*compressor) store()
storeFast will compress and store the currently added data,
if enough has been accumulated or we are at the end of the stream.
Any error that occurred will be in d.err.
storeHuff will compress and store the currently added data,
if enough has been accumulated or we are at the end of the stream.
Any error that occurred will be in d.err.
(*compressor) syncFlush() error
write will add input bytes to the stream.
Unless an error occurs, all bytes will be consumed.
(*compressor) writeBlock(tok *tokens, index int, eof bool) error
writeBlockSkip writes the current block and uses the number of tokens
to determine whether the block should be stored (when there are no matches)
or only Huffman encoded.
(*compressor) writeStoredBlock(buf []byte) error
Decompress state.
	// Input bits, in top of b.
	b  uint32
	nb uint
	// Length arrays used to define Huffman codes.
	codebits *[19]int
	// Temporary buffer (avoids repeated allocation).
	buf      [4]byte
	copyDist int
	copyLen  int
	// Output history, buffer.
	dict  dictDecoder
	err   error
	final bool
	// Huffman decoders for literal/length, distance.
	h1, h2 huffmanDecoder
	hd     *huffmanDecoder
	hl     *huffmanDecoder
	// Input source.
	r       Reader
	roffset int64
	// Next step in the decompression, and decompression state.
	stepState int
	toRead    []byte
(*decompressor) Close() error
(*decompressor) Read(b []byte) (int, error)
(*decompressor) Reset(r io.Reader, dict []byte) error
WriteTo implements the io.WriterTo interface for io.Copy and friends.
copyData copies f.copyLen bytes from the underlying reader into f.hist.
It pauses for reads when f.hist is full.
Copy a single uncompressed data block from input to output.
(*decompressor) doStep()
(*decompressor) finishBlock()
Read the next Huffman-encoded symbol from f according to h.
(*decompressor) huffmanBlockDecoder()
Decode a single Huffman block from f.
hl and hd are the Huffman states for the lit/length values
and the distance values, respectively. If hd == nil, the fixed
distance encoding associated with fixed Huffman blocks is used.
(*decompressor) moreBits() error
(*decompressor) nextBlock()
(*decompressor) readHuffman() error
*decompressor : Resetter
*decompressor : compress/flate.Resetter
*decompressor : io.Closer
*decompressor : io.ReadCloser
*decompressor : io.Reader
*decompressor : io.WriterTo
dictDecoder implements the LZ77 sliding dictionary as used in decompression.
LZ77 decompresses data through sequences of two forms of commands:
- Literal insertions: Runs of one or more symbols are inserted into the data
stream as is. This is accomplished through the writeByte method for a
single symbol, or combinations of writeSlice/writeMark for multiple symbols.
Any valid stream must start with a literal insertion if no preset dictionary
is used.
- Backward copies: Runs of one or more symbols are copied from previously
emitted data. Backward copies come as the tuple (dist, length) where dist
determines how far back in the stream to copy from and length determines how
many bytes to copy. Note that it is valid for the length to be greater than
the distance. Since LZ77 uses forward copies, that situation is used to
perform a form of run-length encoding on repeated runs of symbols.
The writeCopy and tryWriteCopy methods are used to implement this command.
For performance reasons, this implementation performs little to no sanity
checks about the arguments. As such, the invariants documented for each
method call must be respected.
	full  bool   // Has a full window length been written yet?
	hist  []byte // Sliding window history
	// Invariant: 0 <= rdPos <= wrPos <= len(hist)
	rdPos int // Have emitted hist[:rdPos] already
	wrPos int // Current output position in buffer
availRead reports the number of bytes that can be flushed by readFlush.
availWrite reports the available amount of output buffer space.
histSize reports the total amount of historical data in the dictionary.
init initializes dictDecoder to have a sliding window dictionary of the given
size. If a preset dict is provided, it will initialize the dictionary with
the contents of dict.
readFlush returns a slice of the historical buffer that is ready to be
emitted to the user. The data returned by readFlush must be fully consumed
before calling any other dictDecoder methods.
tryWriteCopy tries to copy a string at a given (distance, length) to the
output. This specialized version is optimized for short distances.
This method is designed to be inlined for performance reasons.
This invariant must be kept: 0 < dist <= histSize()
writeByte writes a single byte to the dictionary.
This invariant must be kept: 0 < availWrite()
writeCopy copies a string at a given (dist, length) to the output.
This returns the number of bytes copied and may be less than the requested
length if the available space in the output buffer is too small.
This invariant must be kept: 0 < dist <= histSize()
writeMark advances the write pointer by cnt.
This invariant must be kept: 0 <= cnt <= availWrite()
writeSlice returns a slice of the available buffer to write data to.
This invariant will be kept: len(s) <= availWrite()
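The backward-copy rule described above (length may exceed distance) can be shown with a standalone sketch. lz77Copy is a hypothetical helper illustrating the copy semantics, not this package's dictDecoder:

	package main

	import "fmt"

	// lz77Copy appends `length` bytes copied from `dist` bytes back in hist.
	// Copying byte-by-byte makes overlapping copies (length > dist) act as
	// run-length encoding, exactly as the dictDecoder doc describes.
	func lz77Copy(hist []byte, dist, length int) []byte {
		start := len(hist) - dist
		for i := 0; i < length; i++ {
			hist = append(hist, hist[start+i])
		}
		return hist
	}

	func main() {
		hist := []byte("ab")
		// dist=1, length=4 copies 'b' forward four times: "abbbbb"
		fmt.Println(string(lz77Copy(hist, 1, 4)))
	}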
fastGen maintains the table for matches,
and the previous byte block for level 2.
This is the generic implementation.
	fastGen fastGen
		fastGen.cur  int32
		fastGen.hist []byte
	table [32768]tableEntry
EncodeL1 uses a similar algorithm to level 1.
Reset the encoding table.
(*fastEncL1) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
*fastEncL1 : fastEnc
fastGen maintains the table for matches,
and the previous byte block for level 2.
This is the generic implementation.
	fastGen fastGen
		fastGen.cur  int32
		fastGen.hist []byte
	table [131072]tableEntry
EncodeL2 uses a similar algorithm to level 1, but is capable
of matching across blocks, giving better compression at a small slowdown.
Reset the encoding table.
(*fastEncL2) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
*fastEncL2 : fastEnc
fastEncL3
	fastGen fastGen
		fastGen.cur  int32
		fastGen.hist []byte
	table [65536]tableEntryPrev
Encode uses a similar algorithm to level 2, but will check up to two candidates.
Reset the encoding table.
(*fastEncL3) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
*fastEncL3 : fastEnc
fastEncL4
	bTable [32768]tableEntry
	fastGen fastGen
		fastGen.cur  int32
		fastGen.hist []byte
	table [32768]tableEntry
(*fastEncL4) Encode(dst *tokens, src []byte)
Reset the encoding table.
(*fastEncL4) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
*fastEncL4 : fastEnc
fastEncL5
	bTable [32768]tableEntryPrev
	fastGen fastGen
		fastGen.cur  int32
		fastGen.hist []byte
	table [32768]tableEntry
(*fastEncL5) Encode(dst *tokens, src []byte)
Reset the encoding table.
(*fastEncL5) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
*fastEncL5 : fastEnc
fastEncL5Window is a level 5 encoder,
but with a custom window size.
	bTable    [32768]tableEntryPrev
	cur       int32
	hist      []byte
	maxOffset int32
	table     [32768]tableEntry
(*fastEncL5Window) Encode(dst *tokens, src []byte)
Reset the encoding table.
(*fastEncL5Window) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
*fastEncL5Window : fastEnc
fastEncL6
	bTable [32768]tableEntryPrev
	fastGen fastGen
		fastGen.cur  int32
		fastGen.hist []byte
	table [32768]tableEntry
(*fastEncL6) Encode(dst *tokens, src []byte)
Reset the encoding table.
(*fastEncL6) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
*fastEncL6 : fastEnc
fastGen maintains the table for matches,
and the previous byte block for level 2.
This is the generic implementation.
	cur  int32
	hist []byte
Reset the encoding table.
(*fastGen) addBlock(src []byte) int32
matchlen will return the match length between offsets s and t in src.
The maximum length returned is maxMatchLength - 4.
It is assumed that s > t, that t >= 0 and s < len(src).
matchlenLong will return the match length between offsets s and t in src.
It is assumed that s > t, that t >= 0 and s < len(src).
hcode is a huffman code with a bit code and bit length.
(hcode) code64() uint64
(hcode) len() uint8
set sets the code and length of an hcode.
(hcode) zero() bool
func newhcode(code uint16, length uint8) hcode
Data waiting to be written is bytes[0:nbytes]
and then the low nbits of bits.
	bits  uint64
	bytes [264]byte
	// codegen must have an extra space for the final symbol.
	codegen         [...]uint8
	codegenEncoding *huffmanEncoder
	codegenFreq     [19]uint16
	err             error
	lastHeader      int
	lastHuffMan     bool
	literalEncoding *huffmanEncoder
	literalFreq     [289]uint16
	logNewTablePenalty uint // Set between 0 (reused block can be up to 2x the size)
	nbits  uint8
	nbytes uint8
	offsetEncoding *huffmanEncoder
	offsetFreq     [32]uint16
	tmpLitEncoding *huffmanEncoder
	// writer is the underlying writer.
	// Do not use it directly; use the write method, which ensures
	// that Write errors are sticky.
	writer io.Writer
(*huffmanBitWriter) canReuse(t *tokens) (ok bool)
(*huffmanBitWriter) codegens() int
dynamicSize returns the size of dynamically encoded data in bits.
extraBitSize will return the number of bits that will be written
as "extra" bits on matches.
(*huffmanBitWriter) fillTokens()
fixedSize returns the size of fixed-table encoded data in bits.
(*huffmanBitWriter) flush()
(*huffmanBitWriter) generate()
RFC 1951 3.2.7 specifies a special run-length encoding for specifying
the literal and offset lengths arrays (which are concatenated into a single
array). This method generates that run-length encoding.
The result is written into the codegen array, and the frequencies
of each code are written into the codegenFreq array.
Codes 0-15 are single byte codes. Codes 16-18 are followed by additional
information. Code badCode is an end marker.
	numLiterals     The number of literals in literalEncoding
	numOffsets      The number of offsets in offsetEncoding
	litenc, offenc  The literal and offset encoders to use
(*huffmanBitWriter) headerSize() (size, numCodegens int)
indexTokens indexes a slice of tokens, and updates
literalFreq and offsetFreq, and generates literalEncoding
and offsetEncoding.
The number of literal and offset tokens is returned.
(*huffmanBitWriter) reset(writer io.Writer)
storedSize calculates the stored size, including header.
The function returns the size in bits and whether the block
fits inside a single block.
(*huffmanBitWriter) write(b []byte)
(*huffmanBitWriter) writeBits(b int32, nb uint8)
writeBlock will write a block of tokens with the smallest encoding.
The original input can be supplied, and if the huffman encoded data
is larger than the original bytes, the data will be written as a
stored block.
If the input is nil, the tokens will always be Huffman encoded.
writeBlockDynamic encodes a block using a dynamic Huffman table.
This should be used if the symbols used have a disproportionate
histogram distribution.
If input is supplied and the compression savings are below 1/16th of the
input size, the block is stored.
writeBlockHuff encodes a block of bytes as either
Huffman encoded literals or uncompressed bytes if the
results gain very little from compression.
(*huffmanBitWriter) writeBytes(bytes []byte)
(*huffmanBitWriter) writeCode(c hcode)
Write the header of a dynamic Huffman block to the output stream.
	numLiterals  The number of literals specified in codegen
	numOffsets   The number of offsets specified in codegen
	numCodegens  The number of codegens used in codegen
(*huffmanBitWriter) writeFixedHeader(isEof bool)
writeOutBits will write bits to the buffer.
writeStoredHeader will write a stored header.
If the stored block is only used for EOF,
it is replaced with a fixed huffman block.
writeTokens writes a slice of tokens to the output.
codes for literal and offset encoding must be supplied.
func newHuffmanBitWriter(w io.Writer) *huffmanBitWriter
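To make the RFC 1951 3.2.7 run-length encoding above concrete, here is a minimal, self-contained sketch. rleCodeLengths is an illustrative helper, not this package's codegen generator:

	package main

	import "fmt"

	// rleCodeLengths run-length encodes Huffman code lengths using the
	// RFC 1951 3.2.7 symbols:
	//   0-15: literal code length
	//   16:   copy the previous length 3-6 times (2 extra bits)
	//   17:   repeat length 0 for 3-10 times (3 extra bits)
	//   18:   repeat length 0 for 11-138 times (7 extra bits)
	// The output alternates a symbol with its extra-bits value where needed.
	func rleCodeLengths(lengths []uint8) (out []int) {
		for i := 0; i < len(lengths); {
			n := 1
			for i+n < len(lengths) && lengths[i+n] == lengths[i] {
				n++
			}
			v := lengths[i]
			switch {
			case v == 0 && n >= 11:
				if n > 138 {
					n = 138
				}
				out = append(out, 18, n-11) // count in 7 extra bits
			case v == 0 && n >= 3:
				out = append(out, 17, n-3) // count in 3 extra bits
			case v != 0 && n >= 4:
				out = append(out, int(v)) // emit one literal length first
				run := n - 1
				if run > 6 {
					run = 6
				}
				out = append(out, 16, run-3) // copy it run more times
				n = 1 + run
			default:
				out = append(out, int(v))
				n = 1
			}
			i += n
		}
		return out
	}

	func main() {
		// lengths 8,8,8,8,8,0,0,0,0,9 -> [8 16 1 17 1 9]
		fmt.Println(rleCodeLengths([]uint8{8, 8, 8, 8, 8, 0, 0, 0, 0, 9}))
	}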
	chunks   *[huffmanNumChunks]uint16 // chunks as described above
	linkMask uint32                    // mask the width of the link table
	links    [][]uint16                // overflow links
	maxRead  int                       // the maximum number of bits we can read and not overread
Initialize Huffman decoding tables from array of code lengths.
Following this function, h is guaranteed to be initialized into a complete
tree (i.e., neither over-subscribed nor under-subscribed). The exception is a
degenerate case where the tree has only a single symbol with length 1. Empty
trees are permitted.
var fixedHuffmanDecoder
	bitCount [17]int32
	codes    []hcode
	// Allocate a reusable buffer with the longest possible frequency table.
	// Possible lengths are codegenCodeCount, offsetCodeCount and literalCount.
	// The largest of these is literalCount, so we allocate for that case.
	freqcache []literalNode
Look at the leaves and assign them a bit count and an encoding as specified
in RFC 1951 3.2.2.
Return the number of literals assigned to each bit size in the Huffman encoding.
This method is only called when list.length >= 3.
The cases of 0, 1, and 2 literals are handled by special case code.
	list     An array of the literals with non-zero frequencies
	         and their associated frequencies. The array is in order of increasing
	         frequency, and has as its last element a special element with frequency
	         MaxInt32.
	maxBits  The maximum number of bits that should be used to encode any literal.
	         Must be less than 16.
	return   An integer array in which array[i] indicates the number of literals
	         that should be encoded in i bits.
(*huffmanEncoder) bitLength(freq []uint16) int
(*huffmanEncoder) bitLengthRaw(b []byte) int
canReuseBits returns the number of bits or math.MaxInt32 if the encoder cannot be reused.
Update this Huffman Code object to be the minimum code for the specified frequency count.
	freq     An array of frequencies, in which frequency[i] gives the frequency of literal i.
	maxBits  The maximum number of bits to use for any literal.
func generateFixedLiteralEncoding() *huffmanEncoder
func generateFixedOffsetEncoding() *huffmanEncoder
func newHuffmanEncoder(size int) *huffmanEncoder
var fixedLiteralEncoding *huffmanEncoder
var fixedOffsetEncoding *huffmanEncoder
var huffOffset *huffmanEncoder
A levelInfo describes the state of the constructed tree for a given depth.
	lastFreq int32 // The frequency of the last node at this level
	level    int32 // Our level. for better printing
	// The number of chains remaining to generate for this level before moving
	// up to the next level
	needed int32
	nextCharFreq int32 // The frequency of the next character to add to this level
	// The frequency of the next pair (from level below) to add to this level.
	// Only valid if the "needed" value of the next lower level is 0.
	nextPairFreq int32
(token) length() uint8
Returns the literal of a literal token.
Returns the extra offset of a match token.
Returns the type of a token.
func indexTokens(in []token) tokens
	extraHist [32]uint16  // codes 256->maxnumlit
	litHist   [256]uint16 // codes 0->255
	n         uint16      // Must be able to contain maxStoreBlockSize
	nFilled   int
	offHist   [32]uint16  // offset codes
	tokens    [65536]token
(*tokens) AddEOB()
(*tokens) AddLiteral(lit byte)
AddMatch adds a match to the tokens.
This function is very sensitive to inlining and right on the border.
AddMatchLong adds a match to the tokens, potentially longer than max match length.
Length should NOT have the base subtracted, only offset should.
EstimatedBits will return a minimum size estimated by an *optimal*
compression of the block.
The size of the block
(*tokens) Fill()
FromVarInt restores t to the varint encoded tokens provided.
Any data in t is removed.
(*tokens) Reset()
(*tokens) Slice() []token
VarInt returns the tokens as varint encoded bytes.
(*tokens) indexTokens(in []token)
func indexTokens(in []token) tokens
func emitLiteral(dst *tokens, lit []byte)
func statelessEnc(dst *tokens, src []byte, startAt int16)
Package-Level Functions (total 53, in which 7 are exported)
NewReader returns a new ReadCloser that can be used
to read the uncompressed version of r.
If r does not also implement io.ByteReader,
the decompressor may read more data than necessary from r.
It is the caller's responsibility to call Close on the ReadCloser
when finished reading.
The ReadCloser returned by NewReader also implements Resetter.
NewReaderDict is like NewReader but initializes the reader
with a preset dictionary. The returned Reader behaves as if
the uncompressed data stream started with the given dictionary,
which has already been read. NewReaderDict is typically used
to read data compressed by NewWriterDict.
The ReadCloser returned by NewReaderDict also implements Resetter.
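A minimal round-trip sketch with a preset dictionary (names illustrative):

	package main

	import (
		"bytes"
		"compress/flate"
		"fmt"
		"io"
		"log"
	)

	func main() {
		dict := []byte("common prefix shared by writer and reader")

		// Compress with a preset dictionary.
		var buf bytes.Buffer
		w, err := flate.NewWriterDict(&buf, flate.DefaultCompression, dict)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := w.Write([]byte("common prefix shared by writer and reader, then payload")); err != nil {
			log.Fatal(err)
		}
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}

		// Decompress with the same dictionary.
		r := flate.NewReaderDict(&buf, dict)
		defer r.Close()
		out, err := io.ReadAll(r)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s\n", out)
	}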
NewStatelessWriter will do compression without maintaining any state
between Write calls.
There will be no memory kept between Write calls,
but compression and speed will be suboptimal.
Because of this, the size of actual Write calls will affect output size.
NewWriter returns a new Writer compressing data at the given level.
Following zlib, levels range from 1 (BestSpeed) to 9 (BestCompression);
higher levels typically run slower but compress more.
Level 0 (NoCompression) does not attempt any compression; it only adds the
necessary DEFLATE framing.
Level -1 (DefaultCompression) uses the default compression level.
Level -2 (ConstantCompression) will use Huffman compression only, giving
a very fast compression for all types of input, but sacrificing considerable
compression efficiency.
If level is in the range [-2, 9] then the error returned will be nil.
Otherwise the error returned will be non-nil.
NewWriterDict is like NewWriter but initializes the new
Writer with a preset dictionary. The returned Writer behaves
as if the dictionary had been written to it without producing
any compressed output. The compressed data written to w
can only be decompressed by a Reader initialized with the
same dictionary.
NewWriterWindow returns a new Writer compressing data with a custom window size.
windowSize must be from MinCustomWindowSize to MaxCustomWindowSize.
StatelessDeflate allows compressing directly to a Writer without retaining state.
When it returns, everything will have been flushed.
Up to 8KB of an optional dictionary can be given which is presumed to precede the block.
Longer dictionaries will be truncated and will still produce valid output.
Sending nil dictionary is perfectly fine.
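A minimal sketch, assuming the signature StatelessDeflate(out io.Writer, in []byte, eof bool, dict []byte) error as in klauspost/compress/flate; this listing does not show the signature, so treat it as an assumption:

	package main

	import (
		"bytes"
		"compress/flate"
		"log"
	)

	func main() {
		var buf bytes.Buffer
		payload := []byte("stateless block")

		// Compress one block with no retained state; eof=true finishes
		// the stream, nil means no preset dictionary.
		if err := flate.StatelessDeflate(&buf, payload, true, nil); err != nil {
			log.Fatal(err)
		}
		log.Printf("compressed %d -> %d bytes", len(payload), buf.Len())
	}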
atLeastOne clamps the result between 1 and 15.
bulkHash4 will compute hashes using the same
algorithm as hash4
hash4 returns a hash representation of the first 4 bytes
of the supplied slice.
The caller must ensure that len(b) >= 4.
hash4 returns the hash of u to fit in a hash table with h bits.
Preferably h should be a constant and should always be <32.
hash7 returns the hash of the lowest 7 bytes of u to fit in a hash table with h bits.
Preferably h should be a constant and should always be <64.
hashLen returns a hash of the lowest mls bytes of u, with length output bits.
mls must be >=3 and <=8. Any other value will return a hash for 4 bytes.
length should always be < 32.
Preferably length and mls should be a constant for inlining.
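For intuition, h-bit table hashes of this kind are typically multiplicative hashes. A sketch with an illustrative constant (hash4 here is a hypothetical helper; the package's own constant and implementation may differ):

	package main

	import "fmt"

	// hash4 maps a 4-byte value u into a table of 1<<h entries by
	// multiplying with a large odd constant and keeping the top h bits.
	func hash4(u uint32, h uint8) uint32 {
		const prime4bytes = 2654435761 // ~2^32 / golden ratio; illustrative
		return (u * prime4bytes) >> (32 - h)
	}

	func main() {
		// Two different 4-byte sequences usually land in different buckets.
		fmt.Println(hash4(0x61626364, 15), hash4(0x61626365, 15))
	}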
siftDown implements the heap property on data[lo, hi).
first is an offset into the array where the root of the heap lies.
Sort sorts data.
It makes one call to data.Len to determine n, and O(n*log(n)) calls to
data.Less and data.Swap. The sort is not guaranteed to be stable.
Initialize the fixedHuffmanDecoder only once upon first use.
huffOffset is a static offset encoder used for huffman only encoding.
It can be reused since we will not be encoding offset values.
The length indicated by length code X - LENGTH_CODES_START.
The length code for length X (MIN_MATCH_LENGTH <= X <= MAX_MATCH_LENGTH)
is lengthCodes[length - MIN_MATCH_LENGTH]
lengthCodes1 is length codes, but starting at 1.
The number of extra bits needed by length code X - LENGTH_CODES_START.
Compression levels have been rebalanced from zlib deflate defaults
to give a bigger spread in speed and compression.
See https://blog.klauspost.com/rebalancing-deflate-compression-levels/
HuffmanOnly disables Lempel-Ziv match searching and only performs Huffman
entropy encoding. This mode is useful in compressing data that has
already been compressed with an LZ style algorithm (e.g. Snappy or LZ4)
that lacks an entropy encoder. Compression gains are achieved when
certain bytes in the input stream occur more frequently than others.
Note that HuffmanOnly produces a compressed output that is
RFC 1951 compliant. That is, any valid DEFLATE decompressor will
continue to be able to decompress this output.
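A minimal sketch of Huffman-only compression (names illustrative):

	package main

	import (
		"bytes"
		"compress/flate"
		"log"
	)

	func main() {
		// Huffman-only mode: no match searching, just entropy coding.
		// Useful on top of LZ4/Snappy-style output, as described above.
		var buf bytes.Buffer
		w, err := flate.NewWriter(&buf, flate.HuffmanOnly)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := w.Write(bytes.Repeat([]byte("abcabc"), 100)); err != nil {
			log.Fatal(err)
		}
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}
		// Output is still valid DEFLATE and readable by flate.NewReader.
		log.Printf("600 -> %d bytes", buf.Len())
	}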
MaxCustomWindowSize is the maximum custom window that can be sent to NewWriterWindow.
MinCustomWindowSize is the minimum window size that can be sent to NewWriterWindow.
bufferFlushSize indicates the buffer size
after which bytes are flushed to the writer.
Should preferably be a multiple of 6, since
we accumulate 6 bytes between writes to the buffer.
const bufferReset = 2147090437 // Reset the buffer offset when reaching this.
The maximum number of tokens we will encode at a time.
Smaller sizes usually create less optimal blocks.
Bigger can make context switching slow.
We use this for levels 7-9, so we make it big.
The next three numbers come from the RFC section 3.2.7, with the
additional proviso in section 3.2.5 which implies that distance codes
30 and 31 should never occur in compressed data.
maxPredefinedTokens is the maximum number of tokens
where we check if fixed size is smaller.