[feature] S3 support (#674)

* feat: vendor minio client

* feat: introduce storage package with s3 support

* feat: serve s3 files directly

this saves a lot of bandwidth, as the files are fetched from the object
store directly

* fix: use explicit local storage in tests

* feat: integrate s3 storage with the main server

* fix: add s3 config to cli tests

* docs: explicitly set values in example config

also adds license header to the storage package

* fix: use better http status code on s3 redirect

HTTP 302 Found is the best fit, as it signifies that the requested
resource was found, just not under its presumed URL.

307 Temporary Redirect would imply that the resource is usually located
here, which is not the case.

303 See Other indicates that the redirection does not lead to the
requested resource itself but to another page.
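
For illustration only, a minimal sketch of this redirect approach using the vendored minio client. The v7 import path, the handler shape, and names such as `serveFromS3` and `bucket` are assumptions, not the PR's actual code:

```go
package s3demo // hypothetical package; not part of this PR

import (
	"net/http"
	"net/url"
	"strings"
	"time"

	"github.com/minio/minio-go/v7" // import path assumed
)

// serveFromS3 answers download requests with a 302 redirect to a
// short-lived presigned URL, so the object store serves the bytes directly.
func serveFromS3(client *minio.Client, bucket string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		objectName := strings.TrimPrefix(r.URL.Path, "/") // illustrative key mapping
		u, err := client.PresignedGetObject(r.Context(), bucket, objectName,
			15*time.Minute, url.Values{})
		if err != nil {
			http.Error(w, "object unavailable", http.StatusInternalServerError)
			return
		}
		// 302 Found: the resource exists, just not under the requested URL.
		http.Redirect(w, r, u.String(), http.StatusFound)
	}
}
```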

* refactor: use context in storage driver interface
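
The storage driver interface itself is not visible in this excerpt; as a hypothetical sketch (all names assumed), a context-aware driver after this refactor might look like:

```go
package storage

import (
	"context"
	"io"
)

// Driver is a hypothetical sketch: threading context.Context through the
// interface lets reads and writes be cancelled with the request that
// issued them.
type Driver interface {
	Get(ctx context.Context, name string) (io.ReadCloser, error)
	Put(ctx context.Context, name string, r io.Reader) error
	Delete(ctx context.Context, name string) error
}
```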
Dominik Süß
2022-07-03 12:08:30 +02:00
committed by GitHub
parent 07b0a42b7f
commit 9d0df426da
250 changed files with 77798 additions and 185 deletions

vendor/github.com/dustin/go-humanize/.travis.yml generated vendored Normal file
@@ -0,0 +1,21 @@
sudo: false
language: go
go:
- 1.3.x
- 1.5.x
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- master
matrix:
allow_failures:
- go: master
fast_finish: true
install:
- # Do nothing. This is needed to prevent default install action "go get -t -v ./..." from happening here (we want it to happen inside script step).
script:
- go get -t -v ./...
- diff -u <(echo -n) <(gofmt -d -s .)
- go tool vet .
- go test -v -race ./...

vendor/github.com/dustin/go-humanize/LICENSE generated vendored Normal file
@@ -0,0 +1,21 @@
Copyright (c) 2005-2008 Dustin Sallings <dustin@spy.net>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
<http://www.opensource.org/licenses/mit-license.php>

vendor/github.com/dustin/go-humanize/README.markdown generated vendored Normal file
@@ -0,0 +1,124 @@
# Humane Units [![Build Status](https://travis-ci.org/dustin/go-humanize.svg?branch=master)](https://travis-ci.org/dustin/go-humanize) [![GoDoc](https://godoc.org/github.com/dustin/go-humanize?status.svg)](https://godoc.org/github.com/dustin/go-humanize)
Just a few functions for helping humanize times and sizes.
`go get` it as `github.com/dustin/go-humanize`, import it as
`"github.com/dustin/go-humanize"`, use it as `humanize`.
See [godoc](https://godoc.org/github.com/dustin/go-humanize) for
complete documentation.
## Sizes
This lets you take numbers like `82854982` and convert them to useful
strings like `83 MB` or `79 MiB` (whichever you prefer).
Example:
```go
fmt.Printf("That file is %s.", humanize.Bytes(82854982)) // That file is 83 MB.
```
## Times
This lets you take a `time.Time` and spit it out in relative terms.
For example, `12 seconds ago` or `3 days from now`.
Example:
```go
fmt.Printf("This was touched %s.", humanize.Time(someTimeInstance)) // This was touched 7 hours ago.
```
Thanks to Kyle Lemons for the time implementation from an IRC
conversation one day. It's pretty neat.
## Ordinals
From a [mailing list discussion][odisc] where a user wanted to be able
to label ordinals.
0 -> 0th
1 -> 1st
2 -> 2nd
3 -> 3rd
4 -> 4th
[...]
Example:
```go
fmt.Printf("You're my %s best friend.", humanize.Ordinal(193)) // You are my 193rd best friend.
```
## Commas
Want to shove commas into numbers? Be my guest.
0 -> 0
100 -> 100
1000 -> 1,000
1000000000 -> 1,000,000,000
-100000 -> -100,000
Example:
```go
fmt.Printf("You owe $%s.\n", humanize.Comma(6582491)) // You owe $6,582,491.
```
## Ftoa
Nicer float64 formatter that removes trailing zeros.
```go
fmt.Printf("%f", 2.24) // 2.240000
fmt.Printf("%s", humanize.Ftoa(2.24)) // 2.24
fmt.Printf("%f", 2.0) // 2.000000
fmt.Printf("%s", humanize.Ftoa(2.0)) // 2
```
## SI notation
Format numbers with [SI notation][sinotation].
Example:
```go
humanize.SI(0.00000000223, "M") // 2.23 nM
```
## English-specific functions
The following functions are in the `humanize/english` subpackage.
### Plurals
Simple English pluralization
```go
english.PluralWord(1, "object", "") // object
english.PluralWord(42, "object", "") // objects
english.PluralWord(2, "bus", "") // buses
english.PluralWord(99, "locus", "loci") // loci
english.Plural(1, "object", "") // 1 object
english.Plural(42, "object", "") // 42 objects
english.Plural(2, "bus", "") // 2 buses
english.Plural(99, "locus", "loci") // 99 loci
```
### Word series
Format comma-separated word lists with conjunctions:
```go
english.WordSeries([]string{"foo"}, "and") // foo
english.WordSeries([]string{"foo", "bar"}, "and") // foo and bar
english.WordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar and baz
english.OxfordWordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar, and baz
```
[odisc]: https://groups.google.com/d/topic/golang-nuts/l8NhI74jl-4/discussion
[sinotation]: http://en.wikipedia.org/wiki/Metric_prefix

vendor/github.com/dustin/go-humanize/big.go generated vendored Normal file
@@ -0,0 +1,31 @@
package humanize
import (
"math/big"
)
// order of magnitude (to a max order)
func oomm(n, b *big.Int, maxmag int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
if mag == maxmag && maxmag >= 0 {
break
}
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}
// total order of magnitude
// (same as above, but with no upper limit)
func oom(n, b *big.Int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}

vendor/github.com/dustin/go-humanize/bigbytes.go generated vendored Normal file
@@ -0,0 +1,173 @@
package humanize
import (
"fmt"
"math/big"
"strings"
"unicode"
)
var (
bigIECExp = big.NewInt(1024)
// BigByte is one byte in big.Ints
BigByte = big.NewInt(1)
// BigKiByte is 1,024 bytes in big.Ints
BigKiByte = (&big.Int{}).Mul(BigByte, bigIECExp)
// BigMiByte is 1,024 k bytes in big.Ints
BigMiByte = (&big.Int{}).Mul(BigKiByte, bigIECExp)
// BigGiByte is 1,024 m bytes in big.Ints
BigGiByte = (&big.Int{}).Mul(BigMiByte, bigIECExp)
// BigTiByte is 1,024 g bytes in big.Ints
BigTiByte = (&big.Int{}).Mul(BigGiByte, bigIECExp)
// BigPiByte is 1,024 t bytes in big.Ints
BigPiByte = (&big.Int{}).Mul(BigTiByte, bigIECExp)
// BigEiByte is 1,024 p bytes in big.Ints
BigEiByte = (&big.Int{}).Mul(BigPiByte, bigIECExp)
// BigZiByte is 1,024 e bytes in big.Ints
BigZiByte = (&big.Int{}).Mul(BigEiByte, bigIECExp)
// BigYiByte is 1,024 z bytes in big.Ints
BigYiByte = (&big.Int{}).Mul(BigZiByte, bigIECExp)
)
var (
bigSIExp = big.NewInt(1000)
// BigSIByte is one SI byte in big.Ints
BigSIByte = big.NewInt(1)
// BigKByte is 1,000 SI bytes in big.Ints
BigKByte = (&big.Int{}).Mul(BigSIByte, bigSIExp)
// BigMByte is 1,000 SI k bytes in big.Ints
BigMByte = (&big.Int{}).Mul(BigKByte, bigSIExp)
// BigGByte is 1,000 SI m bytes in big.Ints
BigGByte = (&big.Int{}).Mul(BigMByte, bigSIExp)
// BigTByte is 1,000 SI g bytes in big.Ints
BigTByte = (&big.Int{}).Mul(BigGByte, bigSIExp)
// BigPByte is 1,000 SI t bytes in big.Ints
BigPByte = (&big.Int{}).Mul(BigTByte, bigSIExp)
// BigEByte is 1,000 SI p bytes in big.Ints
BigEByte = (&big.Int{}).Mul(BigPByte, bigSIExp)
// BigZByte is 1,000 SI e bytes in big.Ints
BigZByte = (&big.Int{}).Mul(BigEByte, bigSIExp)
// BigYByte is 1,000 SI z bytes in big.Ints
BigYByte = (&big.Int{}).Mul(BigZByte, bigSIExp)
)
var bigBytesSizeTable = map[string]*big.Int{
"b": BigByte,
"kib": BigKiByte,
"kb": BigKByte,
"mib": BigMiByte,
"mb": BigMByte,
"gib": BigGiByte,
"gb": BigGByte,
"tib": BigTiByte,
"tb": BigTByte,
"pib": BigPiByte,
"pb": BigPByte,
"eib": BigEiByte,
"eb": BigEByte,
"zib": BigZiByte,
"zb": BigZByte,
"yib": BigYiByte,
"yb": BigYByte,
// Without suffix
"": BigByte,
"ki": BigKiByte,
"k": BigKByte,
"mi": BigMiByte,
"m": BigMByte,
"gi": BigGiByte,
"g": BigGByte,
"ti": BigTiByte,
"t": BigTByte,
"pi": BigPiByte,
"p": BigPByte,
"ei": BigEiByte,
"e": BigEByte,
"z": BigZByte,
"zi": BigZiByte,
"y": BigYByte,
"yi": BigYiByte,
}
var ten = big.NewInt(10)
func humanateBigBytes(s, base *big.Int, sizes []string) string {
if s.Cmp(ten) < 0 {
return fmt.Sprintf("%d B", s)
}
c := (&big.Int{}).Set(s)
val, mag := oomm(c, base, len(sizes)-1)
suffix := sizes[mag]
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// BigBytes produces a human readable representation of an SI size.
//
// See also: ParseBigBytes.
//
// BigBytes(82854982) -> 83 MB
func BigBytes(s *big.Int) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"}
return humanateBigBytes(s, bigSIExp, sizes)
}
// BigIBytes produces a human readable representation of an IEC size.
//
// See also: ParseBigBytes.
//
// BigIBytes(82854982) -> 79 MiB
func BigIBytes(s *big.Int) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"}
return humanateBigBytes(s, bigIECExp, sizes)
}
// ParseBigBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See also: BigBytes, BigIBytes.
//
// ParseBigBytes("42 MB") -> 42000000, nil
// ParseBigBytes("42 mib") -> 44040192, nil
func ParseBigBytes(s string) (*big.Int, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
val := &big.Rat{}
_, err := fmt.Sscanf(num, "%f", val)
if err != nil {
return nil, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bigBytesSizeTable[extra]; ok {
mv := (&big.Rat{}).SetInt(m)
val.Mul(val, mv)
rv := &big.Int{}
rv.Div(val.Num(), val.Denom())
return rv, nil
}
return nil, fmt.Errorf("unhandled size name: %v", extra)
}

vendor/github.com/dustin/go-humanize/bytes.go generated vendored Normal file
@@ -0,0 +1,143 @@
package humanize
import (
"fmt"
"math"
"strconv"
"strings"
"unicode"
)
// IEC Sizes.
// kibis of bytes
const (
Byte = 1 << (iota * 10)
KiByte
MiByte
GiByte
TiByte
PiByte
EiByte
)
// SI Sizes.
const (
IByte = 1
KByte = IByte * 1000
MByte = KByte * 1000
GByte = MByte * 1000
TByte = GByte * 1000
PByte = TByte * 1000
EByte = PByte * 1000
)
var bytesSizeTable = map[string]uint64{
"b": Byte,
"kib": KiByte,
"kb": KByte,
"mib": MiByte,
"mb": MByte,
"gib": GiByte,
"gb": GByte,
"tib": TiByte,
"tb": TByte,
"pib": PiByte,
"pb": PByte,
"eib": EiByte,
"eb": EByte,
// Without suffix
"": Byte,
"ki": KiByte,
"k": KByte,
"mi": MiByte,
"m": MByte,
"gi": GiByte,
"g": GByte,
"ti": TiByte,
"t": TByte,
"pi": PiByte,
"p": PByte,
"ei": EiByte,
"e": EByte,
}
func logn(n, b float64) float64 {
return math.Log(n) / math.Log(b)
}
func humanateBytes(s uint64, base float64, sizes []string) string {
if s < 10 {
return fmt.Sprintf("%d B", s)
}
e := math.Floor(logn(float64(s), base))
suffix := sizes[int(e)]
val := math.Floor(float64(s)/math.Pow(base, e)*10+0.5) / 10
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// Bytes produces a human readable representation of an SI size.
//
// See also: ParseBytes.
//
// Bytes(82854982) -> 83 MB
func Bytes(s uint64) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB"}
return humanateBytes(s, 1000, sizes)
}
// IBytes produces a human readable representation of an IEC size.
//
// See also: ParseBytes.
//
// IBytes(82854982) -> 79 MiB
func IBytes(s uint64) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"}
return humanateBytes(s, 1024, sizes)
}
// ParseBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See Also: Bytes, IBytes.
//
// ParseBytes("42 MB") -> 42000000, nil
// ParseBytes("42 mib") -> 44040192, nil
func ParseBytes(s string) (uint64, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
f, err := strconv.ParseFloat(num, 64)
if err != nil {
return 0, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bytesSizeTable[extra]; ok {
f *= float64(m)
if f >= math.MaxUint64 {
return 0, fmt.Errorf("too large: %v", s)
}
return uint64(f), nil
}
return 0, fmt.Errorf("unhandled size name: %v", extra)
}

vendor/github.com/dustin/go-humanize/comma.go generated vendored Normal file
@@ -0,0 +1,116 @@
package humanize
import (
"bytes"
"math"
"math/big"
"strconv"
"strings"
)
// Comma produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Comma(834142) -> 834,142
func Comma(v int64) string {
sign := ""
// Min int64 can't be negated to a usable value, so it has to be special cased.
if v == math.MinInt64 {
return "-9,223,372,036,854,775,808"
}
if v < 0 {
sign = "-"
v = 0 - v
}
parts := []string{"", "", "", "", "", "", ""}
j := len(parts) - 1
for v > 999 {
parts[j] = strconv.FormatInt(v%1000, 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
v = v / 1000
j--
}
parts[j] = strconv.Itoa(int(v))
return sign + strings.Join(parts[j:], ",")
}
// Commaf produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Commaf(834142.32) -> 834,142.32
func Commaf(v float64) string {
buf := &bytes.Buffer{}
if v < 0 {
buf.Write([]byte{'-'})
v = 0 - v
}
comma := []byte{','}
parts := strings.Split(strconv.FormatFloat(v, 'f', -1, 64), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}
// CommafWithDigits works like Commaf but limits the resulting
// string to the given number of decimal places.
//
// e.g. CommafWithDigits(834142.32, 1) -> 834,142.3
func CommafWithDigits(f float64, decimals int) string {
return stripTrailingDigits(Commaf(f), decimals)
}
// BigComma produces a string form of the given big.Int in base 10
// with commas after every three orders of magnitude.
func BigComma(b *big.Int) string {
sign := ""
if b.Sign() < 0 {
sign = "-"
b.Abs(b)
}
athousand := big.NewInt(1000)
c := (&big.Int{}).Set(b)
_, m := oom(c, athousand)
parts := make([]string, m+1)
j := len(parts) - 1
mod := &big.Int{}
for b.Cmp(athousand) >= 0 {
b.DivMod(b, athousand, mod)
parts[j] = strconv.FormatInt(mod.Int64(), 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
j--
}
parts[j] = strconv.Itoa(int(b.Int64()))
return sign + strings.Join(parts[j:], ",")
}

vendor/github.com/dustin/go-humanize/commaf.go generated vendored Normal file
@@ -0,0 +1,40 @@
// +build go1.6

package humanize
import (
"bytes"
"math/big"
"strings"
)
// BigCommaf produces a string form of the given big.Float in base 10
// with commas after every three orders of magnitude.
func BigCommaf(v *big.Float) string {
buf := &bytes.Buffer{}
if v.Sign() < 0 {
buf.Write([]byte{'-'})
v.Abs(v)
}
comma := []byte{','}
parts := strings.Split(v.Text('f', -1), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}

vendor/github.com/dustin/go-humanize/ftoa.go generated vendored Normal file
@@ -0,0 +1,46 @@
package humanize
import (
"strconv"
"strings"
)
func stripTrailingZeros(s string) string {
offset := len(s) - 1
for offset > 0 {
if s[offset] == '.' {
offset--
break
}
if s[offset] != '0' {
break
}
offset--
}
return s[:offset+1]
}
func stripTrailingDigits(s string, digits int) string {
if i := strings.Index(s, "."); i >= 0 {
if digits <= 0 {
return s[:i]
}
i++
if i+digits >= len(s) {
return s
}
return s[:i+digits]
}
return s
}
// Ftoa converts a float to a string with no trailing zeros.
func Ftoa(num float64) string {
return stripTrailingZeros(strconv.FormatFloat(num, 'f', 6, 64))
}
// FtoaWithDigits converts a float to a string but limits the resulting string
// to the given number of decimal places, and no trailing zeros.
func FtoaWithDigits(num float64, digits int) string {
return stripTrailingZeros(stripTrailingDigits(strconv.FormatFloat(num, 'f', 6, 64), digits))
}

vendor/github.com/dustin/go-humanize/humanize.go generated vendored Normal file
@@ -0,0 +1,8 @@
/*
Package humanize converts boring ugly numbers to human-friendly strings and back.
Durations can be turned into strings such as "3 days ago", numbers
representing sizes like 82854982 into useful strings like "83 MB" or
"79 MiB" (whichever you prefer).
*/
package humanize

vendor/github.com/dustin/go-humanize/number.go generated vendored Normal file
@@ -0,0 +1,192 @@
package humanize
/*
Slightly adapted from the source to fit go-humanize.
Author: https://github.com/gorhill
Source: https://gist.github.com/gorhill/5285193
*/
import (
"math"
"strconv"
)
var (
renderFloatPrecisionMultipliers = [...]float64{
1,
10,
100,
1000,
10000,
100000,
1000000,
10000000,
100000000,
1000000000,
}
renderFloatPrecisionRounders = [...]float64{
0.5,
0.05,
0.005,
0.0005,
0.00005,
0.000005,
0.0000005,
0.00000005,
0.000000005,
0.0000000005,
}
)
// FormatFloat produces a formatted number as a string based on the following user-specified criteria:
// * thousands separator
// * decimal separator
// * decimal precision
//
// Usage: s := FormatFloat(format, n)
// The format parameter tells how to render the number n.
//
// See examples: http://play.golang.org/p/LXc1Ddm1lJ
//
// Examples of format strings, given n = 12345.6789:
// "#,###.##" => "12,345.67"
// "#,###." => "12,345"
// "#,###" => "12345,678"
// "#\u202F###,##" => "12345,68"
// "#.###,###### => 12.345,678900
// "" (aka default format) => 12,345.67
//
// The highest precision allowed is 9 digits after the decimal symbol.
// There is also a version for integer numbers, FormatInteger(),
// which is convenient for calls within templates.
func FormatFloat(format string, n float64) string {
// Special cases:
// NaN = "NaN"
// +Inf = "+Infinity"
// -Inf = "-Infinity"
if math.IsNaN(n) {
return "NaN"
}
if n > math.MaxFloat64 {
return "Infinity"
}
if n < -math.MaxFloat64 {
return "-Infinity"
}
// default format
precision := 2
decimalStr := "."
thousandStr := ","
positiveStr := ""
negativeStr := "-"
if len(format) > 0 {
format := []rune(format)
// If there is an explicit format directive,
// then default values are these:
precision = 9
thousandStr = ""
// collect indices of meaningful formatting directives
formatIndx := []int{}
for i, char := range format {
if char != '#' && char != '0' {
formatIndx = append(formatIndx, i)
}
}
if len(formatIndx) > 0 {
// Directive at index 0:
// Must be a '+'
// Raise an error if not the case
// index: 0123456789
// +0.000,000
// +000,000.0
// +0000.00
// +0000
if formatIndx[0] == 0 {
if format[formatIndx[0]] != '+' {
panic("RenderFloat(): invalid positive sign directive")
}
positiveStr = "+"
formatIndx = formatIndx[1:]
}
// Two directives:
// First is thousands separator
// Raise an error if not followed by 3-digit
// 0123456789
// 0.000,000
// 000,000.00
if len(formatIndx) == 2 {
if (formatIndx[1] - formatIndx[0]) != 4 {
panic("RenderFloat(): thousands separator directive must be followed by 3 digit-specifiers")
}
thousandStr = string(format[formatIndx[0]])
formatIndx = formatIndx[1:]
}
// One directive:
// Directive is decimal separator
// The number of digit-specifier following the separator indicates wanted precision
// 0123456789
// 0.00
// 000,0000
if len(formatIndx) == 1 {
decimalStr = string(format[formatIndx[0]])
precision = len(format) - formatIndx[0] - 1
}
}
}
// generate sign part
var signStr string
if n >= 0.000000001 {
signStr = positiveStr
} else if n <= -0.000000001 {
signStr = negativeStr
n = -n
} else {
signStr = ""
n = 0.0
}
// split number into integer and fractional parts
intf, fracf := math.Modf(n + renderFloatPrecisionRounders[precision])
// generate integer part string
intStr := strconv.FormatInt(int64(intf), 10)
// add thousand separator if required
if len(thousandStr) > 0 {
for i := len(intStr); i > 3; {
i -= 3
intStr = intStr[:i] + thousandStr + intStr[i:]
}
}
// no fractional part, we can leave now
if precision == 0 {
return signStr + intStr
}
// generate fractional part
fracStr := strconv.Itoa(int(fracf * renderFloatPrecisionMultipliers[precision]))
// may need padding
if len(fracStr) < precision {
fracStr = "000000000000000"[:precision-len(fracStr)] + fracStr
}
return signStr + intStr + decimalStr + fracStr
}
// FormatInteger produces a formatted number as string.
// See FormatFloat.
func FormatInteger(format string, n int) string {
return FormatFloat(format, float64(n))
}

vendor/github.com/dustin/go-humanize/ordinals.go generated vendored Normal file
@@ -0,0 +1,25 @@
package humanize
import "strconv"
// Ordinal gives you the input number in a rank/ordinal format.
//
// Ordinal(3) -> 3rd
func Ordinal(x int) string {
suffix := "th"
switch x % 10 {
case 1:
if x%100 != 11 {
suffix = "st"
}
case 2:
if x%100 != 12 {
suffix = "nd"
}
case 3:
if x%100 != 13 {
suffix = "rd"
}
}
return strconv.Itoa(x) + suffix
}

vendor/github.com/dustin/go-humanize/si.go generated vendored Normal file
@@ -0,0 +1,123 @@
package humanize
import (
"errors"
"math"
"regexp"
"strconv"
)
var siPrefixTable = map[float64]string{
-24: "y", // yocto
-21: "z", // zepto
-18: "a", // atto
-15: "f", // femto
-12: "p", // pico
-9: "n", // nano
-6: "µ", // micro
-3: "m", // milli
0: "",
3: "k", // kilo
6: "M", // mega
9: "G", // giga
12: "T", // tera
15: "P", // peta
18: "E", // exa
21: "Z", // zetta
24: "Y", // yotta
}
var revSIPrefixTable = revfmap(siPrefixTable)
// revfmap reverses the map and precomputes the power multiplier
func revfmap(in map[float64]string) map[string]float64 {
rv := map[string]float64{}
for k, v := range in {
rv[v] = math.Pow(10, k)
}
return rv
}
var riParseRegex *regexp.Regexp
func init() {
ri := `^([\-0-9.]+)\s?([`
for _, v := range siPrefixTable {
ri += v
}
ri += `]?)(.*)`
riParseRegex = regexp.MustCompile(ri)
}
// ComputeSI finds the most appropriate SI prefix for the given number
// and returns the prefix along with the value adjusted to be within
// that prefix.
//
// See also: SI, ParseSI.
//
// e.g. ComputeSI(2.2345e-12) -> (2.2345, "p")
func ComputeSI(input float64) (float64, string) {
if input == 0 {
return 0, ""
}
mag := math.Abs(input)
exponent := math.Floor(logn(mag, 10))
exponent = math.Floor(exponent/3) * 3
value := mag / math.Pow(10, exponent)
// Handle special case where value is exactly 1000.0
// Should return 1 M instead of 1000 k
if value == 1000.0 {
exponent += 3
value = mag / math.Pow(10, exponent)
}
value = math.Copysign(value, input)
prefix := siPrefixTable[exponent]
return value, prefix
}
// SI returns a string with default formatting.
//
// SI uses Ftoa to format float value, removing trailing zeros.
//
// See also: ComputeSI, ParseSI.
//
// e.g. SI(1000000, "B") -> 1 MB
// e.g. SI(2.2345e-12, "F") -> 2.2345 pF
func SI(input float64, unit string) string {
value, prefix := ComputeSI(input)
return Ftoa(value) + " " + prefix + unit
}
// SIWithDigits works like SI but limits the resulting string to the
// given number of decimal places.
//
// e.g. SIWithDigits(1000000, 0, "B") -> 1 MB
// e.g. SIWithDigits(2.2345e-12, 2, "F") -> 2.23 pF
func SIWithDigits(input float64, decimals int, unit string) string {
value, prefix := ComputeSI(input)
return FtoaWithDigits(value, decimals) + " " + prefix + unit
}
var errInvalid = errors.New("invalid input")
// ParseSI parses an SI string back into the number and unit.
//
// See also: SI, ComputeSI.
//
// e.g. ParseSI("2.2345 pF") -> (2.2345e-12, "F", nil)
func ParseSI(input string) (float64, string, error) {
found := riParseRegex.FindStringSubmatch(input)
if len(found) != 4 {
return 0, "", errInvalid
}
mag := revSIPrefixTable[found[2]]
unit := found[3]
base, err := strconv.ParseFloat(found[1], 64)
return base * mag, unit, err
}

vendor/github.com/dustin/go-humanize/times.go generated vendored Normal file
@@ -0,0 +1,117 @@
package humanize
import (
"fmt"
"math"
"sort"
"time"
)
// Seconds-based time units
const (
Day = 24 * time.Hour
Week = 7 * Day
Month = 30 * Day
Year = 12 * Month
LongTime = 37 * Year
)
// Time formats a time into a relative string.
//
// Time(someT) -> "3 weeks ago"
func Time(then time.Time) string {
return RelTime(then, time.Now(), "ago", "from now")
}
// A RelTimeMagnitude struct contains a relative time point at which
// the relative format of time will switch to a new format string. A
// slice of these in ascending order by their "D" field is passed to
// CustomRelTime to format durations.
//
// The Format field is a string that may contain a "%s" which will be
// replaced with the appropriate signed label (e.g. "ago" or "from
// now") and a "%d" that will be replaced by the quantity.
//
// The DivBy field is the amount of time the time difference must be
// divided by in order to display correctly.
//
// e.g. if D is 2*time.Minute and you want to display "%d minutes %s"
// DivBy should be time.Minute so whatever the duration is will be
// expressed in minutes.
type RelTimeMagnitude struct {
D time.Duration
Format string
DivBy time.Duration
}
var defaultMagnitudes = []RelTimeMagnitude{
{time.Second, "now", time.Second},
{2 * time.Second, "1 second %s", 1},
{time.Minute, "%d seconds %s", time.Second},
{2 * time.Minute, "1 minute %s", 1},
{time.Hour, "%d minutes %s", time.Minute},
{2 * time.Hour, "1 hour %s", 1},
{Day, "%d hours %s", time.Hour},
{2 * Day, "1 day %s", 1},
{Week, "%d days %s", Day},
{2 * Week, "1 week %s", 1},
{Month, "%d weeks %s", Week},
{2 * Month, "1 month %s", 1},
{Year, "%d months %s", Month},
{18 * Month, "1 year %s", 1},
{2 * Year, "2 years %s", 1},
{LongTime, "%d years %s", Year},
{math.MaxInt64, "a long while %s", 1},
}
// RelTime formats a time into a relative string.
//
// It takes two times and two labels. In addition to the generic time
// delta string (e.g. 5 minutes), the labels are applied so that
// the label corresponding to the smaller time is used.
//
// RelTime(timeInPast, timeInFuture, "earlier", "later") -> "3 weeks earlier"
func RelTime(a, b time.Time, albl, blbl string) string {
return CustomRelTime(a, b, albl, blbl, defaultMagnitudes)
}
// CustomRelTime formats a time into a relative string.
//
// It takes two times, two labels, and a table of relative time formats.
// In addition to the generic time delta string (e.g. 5 minutes), the
// labels are applied so that the label corresponding to the
// smaller time is used.
func CustomRelTime(a, b time.Time, albl, blbl string, magnitudes []RelTimeMagnitude) string {
lbl := albl
diff := b.Sub(a)
if a.After(b) {
lbl = blbl
diff = a.Sub(b)
}
n := sort.Search(len(magnitudes), func(i int) bool {
return magnitudes[i].D > diff
})
if n >= len(magnitudes) {
n = len(magnitudes) - 1
}
mag := magnitudes[n]
args := []interface{}{}
escaped := false
for _, ch := range mag.Format {
if escaped {
switch ch {
case 's':
args = append(args, lbl)
case 'd':
args = append(args, diff/mag.DivBy)
}
escaped = false
} else {
escaped = ch == '%'
}
}
return fmt.Sprintf(mag.Format, args...)
}

vendor/github.com/klauspost/compress/LICENSE generated vendored Normal file
@@ -0,0 +1,304 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Copyright (c) 2019 Klaus Post. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
------------------
Files: gzhttp/*
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016-2017 The New York Times Company
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
------------------
Files: s2/cmd/internal/readahead/*
The MIT License (MIT)
Copyright (c) 2015 Klaus Post
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---------------------
Files: snappy/*
Files: internal/snapref/*
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-----------------
Files: s2/cmd/internal/filepathx/*
Copyright 2016 The filepathx Authors
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/klauspost/compress/s2/.gitignore generated vendored Normal file
@@ -0,0 +1,15 @@
testdata/bench
# These explicitly listed benchmark data files are for an obsolete version of
# snappy_test.go.
testdata/alice29.txt
testdata/asyoulik.txt
testdata/fireworks.jpeg
testdata/geo.protodata
testdata/html
testdata/html_x_4
testdata/kppkn.gtb
testdata/lcet10.txt
testdata/paper-100k.pdf
testdata/plrabn12.txt
testdata/urls.10K

vendor/github.com/klauspost/compress/s2/LICENSE generated vendored Normal file
@@ -0,0 +1,28 @@
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
Copyright (c) 2019 Klaus Post. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/klauspost/compress/s2/README.md generated vendored Normal file
@@ -0,0 +1,717 @@
# S2 Compression
S2 is an extension of [Snappy](https://github.com/google/snappy).
S2 is aimed at high throughput, which is why it features concurrent compression for bigger payloads.
Decoding is compatible with Snappy compressed content, but content compressed with S2 cannot be decompressed by Snappy.
This means that S2 can seamlessly replace Snappy without converting compressed content.
S2 can produce Snappy compatible output, faster and better than Snappy.
If you want the full benefit of the changes, you should use S2 without Snappy compatibility.
S2 is designed to have high throughput on content that cannot be compressed.
This is important, so you don't have to worry about spending CPU cycles on already compressed data.
## Benefits over Snappy
* Better compression
* Adjustable compression (3 levels)
* Concurrent stream compression
* Faster decompression, even for Snappy compatible content
* Ability to quickly skip forward in compressed stream
* Compatible with reading Snappy compressed content
* Smaller block size overhead on incompressible blocks
* Block concatenation
* Uncompressed stream mode
* Automatic stream size padding
* Snappy compatible block compression
## Drawbacks over Snappy
* Not optimized for 32 bit systems.
* Streams use slightly more memory due to larger blocks and concurrency (configurable).
# Usage
Installation: `go get -u github.com/klauspost/compress/s2`
Full package documentation:
[![godoc][1]][2]
[1]: https://godoc.org/github.com/klauspost/compress?status.svg
[2]: https://godoc.org/github.com/klauspost/compress/s2
## Compression
```Go
func EncodeStream(src io.Reader, dst io.Writer) error {
	enc := s2.NewWriter(dst)
	_, err := io.Copy(enc, src)
	if err != nil {
		enc.Close()
		return err
	}
	// Blocks until compression is done.
	return enc.Close()
}
```
You should always call `enc.Close()`, otherwise you will leak resources and your encode will be incomplete.
For the best throughput, you should attempt to reuse the `Writer` using the `Reset()` method.
The Writer in S2 is always buffered, therefore `NewBufferedWriter` in Snappy can be replaced with `NewWriter` in S2.
It is possible to flush any buffered data using the `Flush()` method.
This will block until all data sent to the encoder has been written to the output.
S2 also supports the `io.ReaderFrom` interface, which will consume all input from a reader.
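As an illustrative sketch of the `Reset` reuse mentioned above (not from this README; assumes `io`, `os`, and this package imported, and the function name is made up):
```Go
// Sketch: compress many files with a single, reused encoder.
func EncodeFiles(paths []string) error {
	enc := s2.NewWriter(nil)
	for _, p := range paths {
		src, err := os.Open(p)
		if err != nil {
			return err
		}
		dst, err := os.Create(p + ".s2")
		if err != nil {
			src.Close()
			return err
		}
		enc.Reset(dst) // reuse the encoder's internal buffers for new output
		_, err = io.Copy(enc, src)
		if err == nil {
			err = enc.Close() // flush the stream; Reset makes enc usable again
		}
		src.Close()
		dst.Close()
		if err != nil {
			return err
		}
	}
	return nil
}
```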
Finally, if you have a single block of data you would like to have encoded as a stream,
a slightly more efficient approach is the `EncodeBuffer` method.
This will take ownership of the buffer until the stream is closed.
```Go
func EncodeStream(src []byte, dst io.Writer) error {
	enc := s2.NewWriter(dst)
	// The encoder owns the buffer until Flush or Close is called.
	err := enc.EncodeBuffer(src)
	if err != nil {
		enc.Close()
		return err
	}
	// Blocks until compression is done.
	return enc.Close()
}
```
Each call to `EncodeBuffer` will result in discrete blocks being created without buffering,
so it should only be used a single time per stream.
If you need to write several blocks, you should use the regular io.Writer interface.
## Decompression
```Go
func DecodeStream(src io.Reader, dst io.Writer) error {
	dec := s2.NewReader(src)
	_, err := io.Copy(dst, dec)
	return err
}
```
Similar to the Writer, a Reader can be reused using the `Reset` method.
For the best possible throughput, there is an `EncodeBuffer(buf []byte)` function available.
However, it requires that the provided buffer isn't used after it is handed over to S2 and until the stream is flushed or closed.
For smaller data blocks, there is also a non-streaming interface: `Encode()`, `EncodeBetter()` and `Decode()`.
Do however note that these functions (similar to Snappy) do not provide validation of data,
so data corruption may be undetected. Stream encoding provides CRC checks of data.
It is possible to efficiently skip forward in a compressed stream using the `Skip()` method.
For big skips the decompressor is able to skip blocks without decompressing them.
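A small sketch of using `Skip` (function name illustrative; assumes `io` and this package imported):
```Go
// Sketch: skip the first offset bytes of decompressed output, then copy
// the rest. Whole blocks inside the skipped range are not decompressed.
func DecodeFrom(src io.Reader, dst io.Writer, offset int64) error {
	dec := s2.NewReader(src)
	if err := dec.Skip(offset); err != nil {
		return err
	}
	_, err := io.Copy(dst, dec)
	return err
}
```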
## Single Blocks
Similar to Snappy, S2 offers single block compression.
Blocks do not offer the same flexibility and safety as streams,
but may be preferable for very small payloads, less than 100K.
Using a simple `dst := s2.Encode(nil, src)` will compress `src` and return the compressed result.
It is possible to provide a destination buffer.
If the buffer has a capacity of `s2.MaxEncodedLen(len(src))` it will be used.
If not, a new one will be allocated.
Alternatively, `EncodeBetter`/`EncodeBest` can be used for better, but slightly slower compression.
Similarly, to decompress a block you can use `dst, err := s2.Decode(nil, src)`.
Again, an optional destination buffer can be supplied.
`s2.DecodedLen(src)` can be used to get the minimum capacity needed.
If that is not satisfied, a new buffer will be allocated.
Block functions always operate on a single goroutine since they should only be used for small payloads.
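Putting these calls together, a minimal round-trip sketch (function name illustrative):
```Go
// Sketch: block compression round trip with a preallocated buffer.
func roundTrip(src []byte) ([]byte, error) {
	buf := make([]byte, 0, s2.MaxEncodedLen(len(src)))
	compressed := s2.Encode(buf, src) // buf's capacity is reused
	return s2.Decode(nil, compressed)
}
```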
# Commandline tools
Some very simple commandline tools are provided; `s2c` for compression and `s2d` for decompression.
Binaries can be downloaded on the [Releases Page](https://github.com/klauspost/compress/releases).
Installing them requires Go to be installed. To install them, use:
`go install github.com/klauspost/compress/s2/cmd/s2c && go install github.com/klauspost/compress/s2/cmd/s2d`
To build binaries to the current folder use:
`go build github.com/klauspost/compress/s2/cmd/s2c && go build github.com/klauspost/compress/s2/cmd/s2d`
## s2c
```
Usage: s2c [options] file1 file2
Compresses all files supplied as input separately.
Output files are written as 'filename.ext.s2' or 'filename.ext.snappy'.
By default output files will be overwritten.
Use - as the only file name to read from stdin and write to stdout.
Wildcards are accepted: testdir/*.txt will compress all files in testdir ending with .txt
Directories can be wildcards as well. testdir/*/*.txt will match testdir/subdir/b.txt
File names beginning with 'http://' and 'https://' will be downloaded and compressed.
Only http response code 200 is accepted.
Options:
-bench int
Run benchmark n times. No output will be written
-blocksize string
Max block size. Examples: 64K, 256K, 1M, 4M. Must be power of two and <= 4MB (default "4M")
-c Write all output to stdout. Multiple input files will be concatenated
-cpu int
Compress using this amount of threads (default 32)
-faster
Compress faster, but with a minor compression loss
-help
Display help
-o string
Write output to another file. Single input file only
-pad string
Pad size to a multiple of this value, Examples: 500, 64K, 256K, 1M, 4M, etc (default "1")
-q Don't write any output to terminal, except errors
-rm
Delete source file(s) after successful compression
-safe
Do not overwrite output files
-slower
Compress more, but a lot slower
-snappy
Generate Snappy compatible output stream
-verify
Verify written files
```
## s2d
```
Usage: s2d [options] file1 file2
Decompresses all files supplied as input. Input files must end with '.s2' or '.snappy'.
Output file names have the extension removed. By default output files will be overwritten.
Use - as the only file name to read from stdin and write to stdout.
Wildcards are accepted: testdir/*.txt will compress all files in testdir ending with .txt
Directories can be wildcards as well. testdir/*/*.txt will match testdir/subdir/b.txt
File names beginning with 'http://' and 'https://' will be downloaded and decompressed.
Extensions on downloaded files are ignored. Only http response code 200 is accepted.
Options:
-bench int
Run benchmark n times. No output will be written
-c Write all output to stdout. Multiple input files will be concatenated
-help
Display help
-o string
Write output to another file. Single input file only
-q Don't write any output to terminal, except errors
-rm
Delete source file(s) after successful decompression
-safe
Do not overwrite output files
-verify
Verify files, but do not write output
```
## s2sx: self-extracting archives
s2sx allows creating self-extracting archives with no dependencies.
By default, executables are created for the same platforms as the host os,
but this can be overridden with `-os` and `-arch` parameters.
Extracted files have 0666 permissions, except when the untar option is used.
```
Usage: s2sx [options] file1 file2
Compresses all files supplied as input separately.
If files have '.s2' extension they are assumed to be compressed already.
Output files are written as 'filename.s2sx' and with '.exe' for windows targets.
If output is big, an additional file with ".more" is written. This must be included as well.
By default output files will be overwritten.
Wildcards are accepted: testdir/*.txt will compress all files in testdir ending with .txt
Directories can be wildcards as well. testdir/*/*.txt will match testdir/subdir/b.txt
Options:
-arch string
Destination architecture (default "amd64")
-c Write all output to stdout. Multiple input files will be concatenated
-cpu int
Compress using this amount of threads (default 32)
-help
Display help
-max string
Maximum executable size. Rest will be written to another file. (default "1G")
-os string
Destination operating system (default "windows")
-q Don't write any output to terminal, except errors
-rm
Delete source file(s) after successful compression
-safe
Do not overwrite output files
-untar
Untar on destination
```
Available platforms are:
* darwin-amd64
* darwin-arm64
* linux-amd64
* linux-arm
* linux-arm64
* linux-mips64
* linux-ppc64le
* windows-386
* windows-amd64
By default, there is a size limit of 1GB for the output executable.
When this is exceeded, the remaining file content is written to a file called
output+`.more`. This file must be included and placed alongside the
executable for a successful extraction.
This file *must* have the same name as the executable, so if the executable is renamed,
so must the `.more` file.
This functionality is disabled with stdin/stdout.
### Self-extracting TAR files
If you wrap a TAR file you can specify `-untar` to make it untar on the destination host.
Files are extracted to the current folder with the path specified in the tar file.
Note that tar files are not validated before they are wrapped.
For security reasons files that move below the root folder are not allowed.
# Performance
This section will focus on comparisons to Snappy.
This package is solely aimed at replacing Snappy as a high speed compression package.
If you are mainly looking for better compression [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd)
gives better compression, but typically at speeds slightly below "better" mode in this package.
Compared to the Snappy Go implementation, compression is typically 5-20% better and single-threaded throughput is typically 25-40% higher.
Streams are concurrently compressed. The stream will be distributed among all available CPU cores for the best possible throughput.
A "better" compression mode is also available. This allows to trade a bit of speed for a minor compression gain.
The content compressed in this mode is fully compatible with the standard decoder.
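A minimal stream-compression sketch using these modes; the file names are illustrative, and `WriterBetterCompression` is the option the package exposes for "better" mode:
```
package main

import (
	"io"
	"log"
	"os"

	"github.com/klauspost/compress/s2"
)

func main() {
	in, err := os.Open("input.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	out, err := os.Create("input.bin.s2")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// Streams are compressed concurrently; "better" trades a bit of
	// speed for a minor compression gain.
	enc := s2.NewWriter(out, s2.WriterBetterCompression())
	if _, err := io.Copy(enc, in); err != nil {
		log.Fatal(err)
	}
	// Close flushes any buffered data and writes the stream end marker.
	if err := enc.Close(); err != nil {
		log.Fatal(err)
	}
}
```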
Snappy vs S2 **compression** speed on 16 core (32 thread) computer, using all threads and a single thread (1 CPU):
| File | S2 speed | S2 Throughput | S2 % smaller | S2 "better" | "better" throughput | "better" % smaller |
|-----------------------------------------------------------------------------------------------------|----------|---------------|--------------|-------------|---------------------|--------------------|
| [rawstudio-mint14.tar](https://files.klauspost.com/compress/rawstudio-mint14.7z) | 12.70x | 10556 MB/s | 7.35% | 4.15x | 3455 MB/s | 12.79% |
| (1 CPU) | 1.14x | 948 MB/s | - | 0.42x | 349 MB/s | - |
| [github-june-2days-2019.json](https://files.klauspost.com/compress/github-june-2days-2019.json.zst) | 17.13x | 14484 MB/s | 31.60% | 10.09x | 8533 MB/s | 37.71% |
| (1 CPU) | 1.33x | 1127 MB/s | - | 0.70x | 589 MB/s | - |
| [github-ranks-backup.bin](https://files.klauspost.com/compress/github-ranks-backup.bin.zst) | 15.14x | 12000 MB/s | -5.79% | 6.59x | 5223 MB/s | 5.80% |
| (1 CPU) | 1.11x | 877 MB/s | - | 0.47x | 370 MB/s | - |
| [consensus.db.10gb](https://files.klauspost.com/compress/consensus.db.10gb.zst) | 14.62x | 12116 MB/s | 15.90% | 5.35x | 4430 MB/s | 16.08% |
| (1 CPU) | 1.38x | 1146 MB/s | - | 0.38x | 312 MB/s | - |
| [adresser.json](https://files.klauspost.com/compress/adresser.json.zst) | 8.83x | 17579 MB/s | 43.86% | 6.54x | 13011 MB/s | 47.23% |
| (1 CPU) | 1.14x | 2259 MB/s | - | 0.74x | 1475 MB/s | - |
| [gob-stream](https://files.klauspost.com/compress/gob-stream.7z) | 16.72x | 14019 MB/s | 24.02% | 10.11x | 8477 MB/s | 30.48% |
| (1 CPU) | 1.24x | 1043 MB/s | - | 0.70x | 586 MB/s | - |
| [10gb.tar](http://mattmahoney.net/dc/10gb.html) | 13.33x | 9254 MB/s | 1.84% | 6.75x | 4686 MB/s | 6.72% |
| (1 CPU) | 0.97x | 672 MB/s | - | 0.53x | 366 MB/s | - |
| sharnd.out.2gb | 2.11x | 12639 MB/s | 0.01% | 1.98x | 11833 MB/s | 0.01% |
| (1 CPU) | 0.93x | 5594 MB/s | - | 1.34x | 8030 MB/s | - |
| [enwik9](http://mattmahoney.net/dc/textdata.html) | 19.34x | 8220 MB/s | 3.98% | 7.87x | 3345 MB/s | 15.82% |
| (1 CPU) | 1.06x | 452 MB/s | - | 0.50x | 213 MB/s | - |
| [silesia.tar](http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip) | 10.48x | 6124 MB/s | 5.67% | 3.76x | 2197 MB/s | 12.60% |
| (1 CPU) | 0.97x | 568 MB/s | - | 0.46x | 271 MB/s | - |
| [enwik10](https://encode.su/threads/3315-enwik10-benchmark-results) | 21.07x | 9020 MB/s | 6.36% | 6.91x | 2959 MB/s | 16.95% |
| (1 CPU) | 1.07x | 460 MB/s | - | 0.51x | 220 MB/s | - |
### Legend
* `S2 speed`: Speed of S2 compared to Snappy, using 16 cores and 1 core.
* `S2 throughput`: Throughput of S2 in MB/s.
* `S2 % smaller`: How much smaller the S2 output is than Snappy's, in percent.
* `S2 "better"`: Speed of S2 "better" mode compared to Snappy.
* `"better" throughput`: Throughput of S2 "better" mode in MB/s.
* `"better" % smaller`: How much smaller the S2 "better" output is than Snappy's, in percent.
There is a good speedup across the board when using a single thread and a significant speedup when using multiple threads.
Machine generated data gets by far the biggest compression boost, with output up to 45% smaller than Snappy's.
The "better" compression mode sees a good improvement in all cases, but usually at a performance cost.
Incompressible content (`sharnd.out.2gb`, 2GB random data) sees the smallest speedup.
This is likely dominated by synchronization overhead, which is confirmed by the fact that single threaded performance is higher (see above).
## Decompression
S2 attempts to create content that is also fast to decompress, except in "better" mode where the smallest representation is used.
S2 vs Snappy **decompression** speed. Both operating on single core:
| File | S2 Throughput | vs. Snappy | Better Throughput | vs. Snappy |
|-----------------------------------------------------------------------------------------------------|---------------|------------|-------------------|------------|
| [rawstudio-mint14.tar](https://files.klauspost.com/compress/rawstudio-mint14.7z) | 2117 MB/s | 1.14x | 1738 MB/s | 0.94x |
| [github-june-2days-2019.json](https://files.klauspost.com/compress/github-june-2days-2019.json.zst) | 2401 MB/s | 1.25x | 2307 MB/s | 1.20x |
| [github-ranks-backup.bin](https://files.klauspost.com/compress/github-ranks-backup.bin.zst) | 2075 MB/s | 0.98x | 1764 MB/s | 0.83x |
| [consensus.db.10gb](https://files.klauspost.com/compress/consensus.db.10gb.zst) | 2967 MB/s | 1.05x | 2885 MB/s | 1.02x |
| [adresser.json](https://files.klauspost.com/compress/adresser.json.zst) | 4141 MB/s | 1.07x | 4184 MB/s | 1.08x |
| [gob-stream](https://files.klauspost.com/compress/gob-stream.7z) | 2264 MB/s | 1.12x | 2185 MB/s | 1.08x |
| [10gb.tar](http://mattmahoney.net/dc/10gb.html) | 1525 MB/s | 1.03x | 1347 MB/s | 0.91x |
| sharnd.out.2gb | 3813 MB/s | 0.79x | 3900 MB/s | 0.81x |
| [enwik9](http://mattmahoney.net/dc/textdata.html) | 1246 MB/s | 1.29x | 967 MB/s | 1.00x |
| [silesia.tar](http://sun.aei.polsl.pl/~sdeor/corpus/silesia.zip) | 1433 MB/s | 1.12x | 1203 MB/s | 0.94x |
| [enwik10](https://encode.su/threads/3315-enwik10-benchmark-results) | 1284 MB/s | 1.32x | 1010 MB/s | 1.04x |
### Legend
* `S2 Throughput`: Decompression speed of S2 encoded content.
* `Better Throughput`: Decompression speed of S2 "better" encoded content.
* `vs. Snappy`: Decompression speed relative to Snappy, for the throughput column to its left.
While the decompression code hasn't changed, there is a significant decompression speedup.
S2 prefers longer matches and will typically only find matches that are 6 bytes or longer.
While this reduces compression a bit, it improves decompression speed.
The "better" compression mode will actively look for shorter matches, which is why it has a decompression speed quite similar to Snappy.
Decompression is also very fast without assembly. Single goroutine decompression speed, no assembly:
| File                           | Speedup vs. Snappy | S2 Throughput |
|--------------------------------|--------------------|---------------|
| consensus.db.10gb.s2 | 1.84x | 2289.8 MB/s |
| 10gb.tar.s2 | 1.30x | 867.07 MB/s |
| rawstudio-mint14.tar.s2 | 1.66x | 1329.65 MB/s |
| github-june-2days-2019.json.s2 | 2.36x | 1831.59 MB/s |
| github-ranks-backup.bin.s2 | 1.73x | 1390.7 MB/s |
| enwik9.s2 | 1.67x | 681.53 MB/s |
| adresser.json.s2 | 3.41x | 4230.53 MB/s |
| silesia.tar.s2                 | 1.52x        | 811.58 MB/s   |
Even though S2 typically compresses better than Snappy, decompression speed is always better.
## Block compression
When compressing blocks, no concurrent compression is performed, just as with Snappy.
This is because blocks are for smaller payloads and generally will not benefit from concurrent compression.
An important change is that incompressible blocks will be at most 10 bytes bigger than the input.
In rare, worst-case scenarios Snappy blocks could be significantly bigger than the input.
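A minimal block round-trip sketch using the package's block APIs (`Encode`, `MaxEncodedLen`, `Decode`):
```
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/compress/s2"
)

func main() {
	src := bytes.Repeat([]byte("hello s2 "), 100)

	// MaxEncodedLen bounds the worst-case output size, so a single
	// allocation is always enough.
	encoded := s2.Encode(make([]byte, s2.MaxEncodedLen(len(src))), src)

	decoded, err := s2.Decode(nil, encoded)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(bytes.Equal(src, decoded), len(src), "->", len(encoded))
}
```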
### Mixed content blocks
The most reliable benchmark is a wide dataset.
For this we use [`webdevdata.org-2015-01-07-subset`](https://files.klauspost.com/compress/webdevdata.org-2015-01-07-4GB-subset.7z),
53927 files, total input size: 4,014,735,833 bytes. Single goroutine used.
| * | Input | Output | Reduction | MB/s |
|-------------------|------------|------------|-----------|--------|
| S2 | 4014735833 | 1059723369 | 73.60% | **934.34** |
| S2 Better | 4014735833 | 969670507 | 75.85% | 532.70 |
| S2 Best | 4014735833 | 906625668 | **77.85%** | 46.84 |
| Snappy | 4014735833 | 1128706759 | 71.89% | 762.59 |
| S2, Snappy Output | 4014735833 | 1093821420 | 72.75% | 908.60 |
| LZ4 | 4014735833 | 1079259294 | 73.12% | 526.94 |
S2 delivers both the best single threaded throughput with regular mode and the best compression rate with "best".
"Better" mode provides the same compression speed as LZ4 with better compression ratio.
When outputting Snappy compatible output it still delivers better throughput (150MB/s more) and better compression.
As can be seen from the other benchmarks, decompression should also be easier on the S2 generated output.
Though they cannot be directly compared due to different decompression speeds, here are the speed/size
comparisons for other Go compressors:
| * | Input | Output | Reduction | MB/s |
|-------------------|------------|------------|-----------|--------|
| Zstd Fastest (Go) | 4014735833 | 794608518 | 80.21% | 236.04 |
| Zstd Best (Go) | 4014735833 | 704603356 | 82.45% | 35.63 |
| Deflate (Go) l1 | 4014735833 | 871294239 | 78.30% | 214.04 |
| Deflate (Go) l9 | 4014735833 | 730389060 | 81.81% | 41.17 |
### Standard block compression
Benchmarking single block performance is subject to a lot more variation since it only tests a limited number of file patterns.
So individual benchmarks should only be seen as a guideline and the overall picture is more important.
These micro-benchmarks are with data in cache and trained branch predictors. For a more realistic benchmark see the mixed content above.
Block compression. Parallel benchmark running on 16 cores, 16 goroutines.
AMD64 assembly is used for both S2 and Snappy.
| Absolute Perf | Snappy size | S2 Size | Snappy Speed | S2 Speed | Snappy dec | S2 dec |
|-----------------------|-------------|---------|--------------|-------------|-------------|-------------|
| html | 22843 | 21111 | 16246 MB/s | 17438 MB/s | 40972 MB/s | 49263 MB/s |
| urls.10K | 335492 | 287326 | 7943 MB/s | 9693 MB/s | 22523 MB/s | 26484 MB/s |
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 273889 MB/s | 718321 MB/s | 827552 MB/s |
| fireworks.jpeg (200B) | 146 | 155 | 8869 MB/s | 17773 MB/s | 33691 MB/s | 52421 MB/s |
| paper-100k.pdf | 85304 | 84459 | 167546 MB/s | 101263 MB/s | 326905 MB/s | 291944 MB/s |
| html_x_4 | 92234 | 21113 | 15194 MB/s | 50670 MB/s | 30843 MB/s | 32217 MB/s |
| alice29.txt | 88034 | 85975 | 5936 MB/s | 6139 MB/s | 12882 MB/s | 20044 MB/s |
| asyoulik.txt | 77503 | 79650 | 5517 MB/s | 6366 MB/s | 12735 MB/s | 22806 MB/s |
| lcet10.txt | 234661 | 220670 | 6235 MB/s | 6067 MB/s | 14519 MB/s | 18697 MB/s |
| plrabn12.txt | 319267 | 317985 | 5159 MB/s | 5726 MB/s | 11923 MB/s | 19901 MB/s |
| geo.protodata | 23335 | 18690 | 21220 MB/s | 26529 MB/s | 56271 MB/s | 62540 MB/s |
| kppkn.gtb | 69526 | 65312 | 9732 MB/s | 8559 MB/s | 18491 MB/s | 18969 MB/s |
| alice29.txt (128B) | 80 | 82 | 6691 MB/s | 15489 MB/s | 31883 MB/s | 38874 MB/s |
| alice29.txt (1000B) | 774 | 774 | 12204 MB/s | 13000 MB/s | 48056 MB/s | 52341 MB/s |
| alice29.txt (10000B) | 6648 | 6933 | 10044 MB/s | 12806 MB/s | 32378 MB/s | 46322 MB/s |
| alice29.txt (20000B) | 12686 | 13574 | 7733 MB/s | 11210 MB/s | 30566 MB/s | 58969 MB/s |
| Relative Perf | Snappy size | S2 size improved | S2 Speed | S2 Dec Speed |
|-----------------------|-------------|------------------|----------|--------------|
| html | 22.31% | 7.58% | 1.07x | 1.20x |
| urls.10K | 47.78% | 14.36% | 1.22x | 1.18x |
| fireworks.jpeg | 99.95% | -0.05% | 0.78x | 1.15x |
| fireworks.jpeg (200B) | 73.00% | -6.16% | 2.00x | 1.56x |
| paper-100k.pdf | 83.30% | 0.99% | 0.60x | 0.89x |
| html_x_4 | 22.52% | 77.11% | 3.33x | 1.04x |
| alice29.txt | 57.88% | 2.34% | 1.03x | 1.56x |
| asyoulik.txt | 61.91% | -2.77% | 1.15x | 1.79x |
| lcet10.txt | 54.99% | 5.96% | 0.97x | 1.29x |
| plrabn12.txt | 66.26% | 0.40% | 1.11x | 1.67x |
| geo.protodata | 19.68% | 19.91% | 1.25x | 1.11x |
| kppkn.gtb | 37.72% | 6.06% | 0.88x | 1.03x |
| alice29.txt (128B) | 62.50% | -2.50% | 2.31x | 1.22x |
| alice29.txt (1000B) | 77.40% | 0.00% | 1.07x | 1.09x |
| alice29.txt (10000B) | 66.48% | -4.29% | 1.27x | 1.43x |
| alice29.txt (20000B) | 63.43% | -7.00% | 1.45x | 1.93x |
Speed is generally at or above Snappy. Small blocks get a significant speedup, although at the expense of size.
Decompression speed is better than Snappy, except in one case.
Since payloads are very small, the variance in terms of size is rather big, so they should only be seen as a general guideline.
Size is on average around Snappy, but varies with content type.
In cases where compression is worse, it usually is compensated by a speed boost.
### Better compression
Benchmarking single block performance is subject to a lot more variation since it only tests a limited number of file patterns.
So individual benchmarks should only be seen as a guideline and the overall picture is more important.
| Absolute Perf | Snappy size | Better Size | Snappy Speed | Better Speed | Snappy dec | Better dec |
|-----------------------|-------------|-------------|--------------|--------------|-------------|-------------|
| html | 22843 | 19833 | 16246 MB/s | 7731 MB/s | 40972 MB/s | 40292 MB/s |
| urls.10K | 335492 | 253529 | 7943 MB/s | 3980 MB/s | 22523 MB/s | 20981 MB/s |
| fireworks.jpeg | 123034 | 123100 | 349544 MB/s | 9760 MB/s | 718321 MB/s | 823698 MB/s |
| fireworks.jpeg (200B) | 146 | 142 | 8869 MB/s | 594 MB/s | 33691 MB/s | 30101 MB/s |
| paper-100k.pdf | 85304 | 82915 | 167546 MB/s | 7470 MB/s | 326905 MB/s | 198869 MB/s |
| html_x_4 | 92234 | 19841 | 15194 MB/s | 23403 MB/s | 30843 MB/s | 30937 MB/s |
| alice29.txt | 88034 | 73218 | 5936 MB/s | 2945 MB/s | 12882 MB/s | 16611 MB/s |
| asyoulik.txt | 77503 | 66844 | 5517 MB/s | 2739 MB/s | 12735 MB/s | 14975 MB/s |
| lcet10.txt | 234661 | 190589 | 6235 MB/s | 3099 MB/s | 14519 MB/s | 16634 MB/s |
| plrabn12.txt | 319267 | 270828 | 5159 MB/s | 2600 MB/s | 11923 MB/s | 13382 MB/s |
| geo.protodata | 23335 | 18278 | 21220 MB/s | 11208 MB/s | 56271 MB/s | 57961 MB/s |
| kppkn.gtb | 69526 | 61851 | 9732 MB/s | 4556 MB/s | 18491 MB/s | 16524 MB/s |
| alice29.txt (128B) | 80 | 81 | 6691 MB/s | 529 MB/s | 31883 MB/s | 34225 MB/s |
| alice29.txt (1000B) | 774 | 748 | 12204 MB/s | 1943 MB/s | 48056 MB/s | 42068 MB/s |
| alice29.txt (10000B) | 6648 | 6234 | 10044 MB/s | 2949 MB/s | 32378 MB/s | 28813 MB/s |
| alice29.txt (20000B) | 12686 | 11584 | 7733 MB/s | 2822 MB/s | 30566 MB/s | 27315 MB/s |
| Relative Perf | Snappy size | Better size | Better Speed | Better dec |
|-----------------------|-------------|-------------|--------------|------------|
| html | 22.31% | 13.18% | 0.48x | 0.98x |
| urls.10K | 47.78% | 24.43% | 0.50x | 0.93x |
| fireworks.jpeg | 99.95% | -0.05% | 0.03x | 1.15x |
| fireworks.jpeg (200B) | 73.00% | 2.74% | 0.07x | 0.89x |
| paper-100k.pdf | 83.30% | 2.80% | 0.07x | 0.61x |
| html_x_4 | 22.52% | 78.49% | 0.04x | 1.00x |
| alice29.txt | 57.88% | 16.83% | 1.54x | 1.29x |
| asyoulik.txt | 61.91% | 13.75% | 0.50x | 1.18x |
| lcet10.txt | 54.99% | 18.78% | 0.50x | 1.15x |
| plrabn12.txt | 66.26% | 15.17% | 0.50x | 1.12x |
| geo.protodata | 19.68% | 21.67% | 0.50x | 1.03x |
| kppkn.gtb | 37.72% | 11.04% | 0.53x | 0.89x |
| alice29.txt (128B) | 62.50% | -1.25% | 0.47x | 1.07x |
| alice29.txt (1000B) | 77.40% | 3.36% | 0.08x | 0.88x |
| alice29.txt (10000B) | 66.48% | 6.23% | 0.16x | 0.89x |
| alice29.txt (20000B) | 63.43% | 8.69% | 0.29x | 0.89x |
Except for the mostly incompressible JPEG image, compression is better, usually by
double-digit percentages over Snappy.
The PDF sample shows a significant slowdown compared to Snappy, as this mode tries harder
to compress the data. Very small blocks are also not favorable for better compression, so throughput is way down.
This mode aims to provide better compression at the expense of speed, and achieves that
without a huge penalty, except on very small blocks.
Decompression speed suffers a little compared to the regular S2 mode,
but still manages to be close to Snappy in spite of increased compression.
# Best compression mode
S2 offers a "best" compression mode.
This will compress as much as possible with little regard to CPU usage.
Mainly for offline compression, but where decompression speed should still
be high and compatible with other S2 compressed data.
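A sketch of using best mode for both blocks and streams; `EncodeBest` and `WriterBestCompression` are the corresponding package APIs:
```
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/s2"
)

func main() {
	src := bytes.Repeat([]byte("compress me "), 1000)

	// Block interface: compress as much as possible, ignoring CPU cost.
	best := s2.EncodeBest(make([]byte, s2.MaxEncodedLen(len(src))), src)
	fmt.Println("best block:", len(best), "bytes")

	// Stream interface: the equivalent writer option.
	var buf bytes.Buffer
	enc := s2.NewWriter(&buf, s2.WriterBestCompression())
	if _, err := enc.Write(src); err != nil {
		panic(err)
	}
	if err := enc.Close(); err != nil {
		panic(err)
	}
	fmt.Println("best stream:", buf.Len(), "bytes")
}
```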
Some examples compared on 16 core CPU, amd64 assembly used:
```
* enwik10
Default... 10000000000 -> 4761467548 [47.61%]; 1.098s, 8685.6MB/s
Better... 10000000000 -> 4219438251 [42.19%]; 1.925s, 4954.2MB/s
Best... 10000000000 -> 3627364337 [36.27%]; 43.051s, 221.5MB/s
* github-june-2days-2019.json
Default... 6273951764 -> 1043196283 [16.63%]; 431ms, 13882.3MB/s
Better... 6273951764 -> 949146808 [15.13%]; 547ms, 10938.4MB/s
Best... 6273951764 -> 832855506 [13.27%]; 9.455s, 632.8MB/s
* nyc-taxi-data-10M.csv
Default... 3325605752 -> 1095998837 [32.96%]; 324ms, 9788.7MB/s
Better... 3325605752 -> 954776589 [28.71%]; 491ms, 6459.4MB/s
Best... 3325605752 -> 779098746 [23.43%]; 8.29s, 382.6MB/s
* 10gb.tar
Default... 10065157632 -> 5916578242 [58.78%]; 1.028s, 9337.4MB/s
Better... 10065157632 -> 5649207485 [56.13%]; 1.597s, 6010.6MB/s
Best... 10065157632 -> 5208719802 [51.75%]; 32.78s, 292.8MB/s
* consensus.db.10gb
Default... 10737418240 -> 4562648848 [42.49%]; 882ms, 11610.0MB/s
Better... 10737418240 -> 4542428129 [42.30%]; 1.533s, 6679.7MB/s
Best... 10737418240 -> 4244773384 [39.53%]; 42.96s, 238.4MB/s
```
Decompression speed should be around the same as using the 'better' compression mode.
# Snappy Compatibility
S2 now offers full compatibility with Snappy.
This means that the efficient encoders of S2 can be used to generate fully Snappy compatible output.
There is a [snappy](https://github.com/klauspost/compress/tree/master/snappy) package that can be used by
simply changing imports from `github.com/golang/snappy` to `github.com/klauspost/compress/snappy`.
This uses "better" mode for all operations.
If you would like more control, you can use the s2 package as described below:
## Blocks
Snappy compatible blocks can be generated with the S2 encoder.
Compression and speed are typically a bit better, and `MaxEncodedLen` is also smaller, reducing memory usage. Replace:
| Snappy | S2 replacement |
|----------------------------|-------------------------|
| snappy.Encode(...) | s2.EncodeSnappy(...) |
| snappy.MaxEncodedLen(...) | s2.MaxEncodedLen(...) |
`s2.EncodeSnappy` can be replaced with `s2.EncodeSnappyBetter` or `s2.EncodeSnappyBest` to get more efficiently compressed snappy compatible output.
`s2.ConcatBlocks` is compatible with snappy blocks.
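A minimal sketch producing a Snappy-compatible block with the S2 encoder and verifying it with the reference Snappy decoder:
```
package main

import (
	"fmt"

	"github.com/golang/snappy"
	"github.com/klauspost/compress/s2"
)

func main() {
	src := []byte("Snappy-compatible output, but compressed by S2.")

	// EncodeSnappyBetter/EncodeSnappyBest can be substituted for denser output.
	enc := s2.EncodeSnappy(make([]byte, s2.MaxEncodedLen(len(src))), src)

	// The result is a valid Snappy block, so the reference decoder accepts it.
	dec, err := snappy.Decode(nil, enc)
	fmt.Println(string(dec), err)
}
```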
Comparison of [`webdevdata.org-2015-01-07-subset`](https://files.klauspost.com/compress/webdevdata.org-2015-01-07-4GB-subset.7z),
53927 files, total input size: 4,014,735,833 bytes. amd64, single goroutine used:
| Encoder | Size | MB/s | Reduction |
|-----------------------|------------|--------|-----------|
| snappy.Encode | 1128706759 | 725.59 | 71.89% |
| s2.EncodeSnappy | 1093823291 | 899.16 | 72.75% |
| s2.EncodeSnappyBetter | 1001158548 | 578.49 | 75.06% |
| s2.EncodeSnappyBest | 944507998 | 66.00 | 76.47% |
## Streams
For streams, replace `enc = snappy.NewBufferedWriter(w)` with `enc = s2.NewWriter(w, s2.WriterSnappyCompat())`.
All other options are available, but note that the block size limit is different for Snappy.
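A sketch of the replacement in context; the payload is illustrative:
```
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"

	"github.com/golang/snappy"
	"github.com/klauspost/compress/s2"
)

func main() {
	var buf bytes.Buffer

	// Replaces snappy.NewBufferedWriter; output stays Snappy-compatible.
	enc := s2.NewWriter(&buf, s2.WriterSnappyCompat())
	if _, err := enc.Write([]byte("hello snappy stream")); err != nil {
		log.Fatal(err)
	}
	if err := enc.Close(); err != nil {
		log.Fatal(err)
	}

	// The reference Snappy reader can decode the result.
	dec, err := io.ReadAll(snappy.NewReader(&buf))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(dec))
}
```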
Comparison of different streams, AMD Ryzen 3950x, 16 cores. Size and throughput:
| File | snappy.NewWriter | S2 Snappy | S2 Snappy, Better | S2 Snappy, Best |
|-----------------------------|--------------------------|---------------------------|--------------------------|-------------------------|
| nyc-taxi-data-10M.csv | 1316042016 - 517.54MB/s | 1307003093 - 8406.29MB/s | 1174534014 - 4984.35MB/s | 1115904679 - 177.81MB/s |
| enwik10 | 5088294643 - 433.45MB/s | 5175840939 - 8454.52MB/s | 4560784526 - 4403.10MB/s | 4340299103 - 159.71MB/s |
| 10gb.tar | 6056946612 - 703.25MB/s | 6208571995 - 9035.75MB/s | 5741646126 - 2402.08MB/s | 5548973895 - 171.17MB/s |
| github-june-2days-2019.json | 1525176492 - 908.11MB/s | 1476519054 - 12625.93MB/s | 1400547532 - 6163.61MB/s | 1321887137 - 200.71MB/s |
| consensus.db.10gb | 5412897703 - 1054.38MB/s | 5354073487 - 12634.82MB/s | 5335069899 - 2472.23MB/s | 5201000954 - 166.32MB/s |
## Decompression
All decompression functions map directly to equivalent s2 functions.
| Snappy | S2 replacement |
|------------------------|--------------------|
| snappy.Decode(...) | s2.Decode(...) |
| snappy.DecodedLen(...) | s2.DecodedLen(...) |
| snappy.NewReader(...) | s2.NewReader(...) |
Features like [quick forward skipping without decompression](https://pkg.go.dev/github.com/klauspost/compress/s2#Reader.Skip)
are also available for Snappy streams.
If you know you are only decompressing snappy streams, setting [`ReaderMaxBlockSize(64<<10)`](https://pkg.go.dev/github.com/klauspost/compress/s2#ReaderMaxBlockSize)
on your Reader will reduce memory consumption.
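A decoding sketch combining both features; the file name is illustrative:
```
package main

import (
	"io"
	"log"
	"os"

	"github.com/klauspost/compress/s2"
)

func main() {
	f, err := os.Open("stream.snappy")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Snappy blocks are at most 64KB, so allocations can be capped.
	dec := s2.NewReader(f, s2.ReaderMaxBlockSize(64<<10))

	// Skip the first 1MB of decompressed output without producing it.
	if err := dec.Skip(1 << 20); err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(os.Stdout, dec); err != nil {
		log.Fatal(err)
	}
}
```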
# Concatenating blocks and streams
Concatenating streams will concatenate the output of both without recompressing them.
While this is inefficient in terms of compression, it might be usable in certain scenarios.
The 10 byte 'stream identifier' of the second stream can optionally be stripped, but it is not a requirement.
Blocks can be concatenated using the `ConcatBlocks` function.
Snappy blocks/streams can safely be concatenated with S2 blocks and streams.
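A block-concatenation sketch using `ConcatBlocks`:
```
package main

import (
	"fmt"
	"log"

	"github.com/klauspost/compress/s2"
)

func main() {
	a := s2.Encode(nil, []byte("first part, "))
	b := s2.Encode(nil, []byte("second part"))

	// Merge the two compressed blocks without recompressing them.
	merged, err := s2.ConcatBlocks(nil, a, b)
	if err != nil {
		log.Fatal(err)
	}
	dec, err := s2.Decode(nil, merged)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(dec)) // first part, second part
}
```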
# Format Extensions
* Frame [Stream identifier](https://github.com/google/snappy/blob/master/framing_format.txt#L68) changed from `sNaPpY` to `S2sTwO`.
* [Framed compressed blocks](https://github.com/google/snappy/blob/master/format_description.txt) can be up to 4MB (up from 64KB).
* Compressed blocks can have an offset of `0`, which indicates to repeat the last seen offset.
Repeat offsets must be encoded as a [2.2.1. Copy with 1-byte offset (01)](https://github.com/google/snappy/blob/master/format_description.txt#L89), where the offset is 0.
The length is specified by reading the 3-bit length field in the tag and decoding it using this table:
| Length | Actual Length |
|--------|----------------------|
| 0 | 4 |
| 1 | 5 |
| 2 | 6 |
| 3 | 7 |
| 4 | 8 |
| 5 | 8 + read 1 byte |
| 6 | 260 + read 2 bytes |
| 7 | 65540 + read 3 bytes |
This allows any repeat offset + length to be represented by 2 to 5 bytes.
Lengths are stored as little endian values.
The first copy of a block cannot be a repeat offset and the offset is not carried across blocks in streams.
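As a hedged sketch (not the library's internal decoder), the table above can be expanded like this, where `code` is the 3-bit length field from the tag and `extra` holds the little-endian bytes that follow it:
```
// decodeRepeatLen expands the 3-bit repeat-length code from the table above.
// bytesRead reports how many extra bytes were consumed after the tag.
func decodeRepeatLen(code uint8, extra []byte) (length, bytesRead int) {
	switch {
	case code < 5:
		return int(code) + 4, 0
	case code == 5:
		return 8 + int(extra[0]), 1
	case code == 6:
		return 260 + (int(extra[0]) | int(extra[1])<<8), 2
	default: // code == 7
		return 65540 + (int(extra[0]) | int(extra[1])<<8 | int(extra[2])<<16), 3
	}
}
```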
Default streaming block size is 1MB.
# LICENSE
This code is based on the [Snappy-Go](https://github.com/golang/snappy) implementation.
Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.

565
vendor/github.com/klauspost/compress/s2/decode.go generated vendored Normal file
View File

@ -0,0 +1,565 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package s2
import (
"encoding/binary"
"errors"
"io"
)
var (
// ErrCorrupt reports that the input is invalid.
ErrCorrupt = errors.New("s2: corrupt input")
// ErrCRC reports that the input failed CRC validation (streams only)
ErrCRC = errors.New("s2: corrupt input, crc mismatch")
// ErrTooLarge reports that the uncompressed length is too large.
ErrTooLarge = errors.New("s2: decoded block is too large")
// ErrUnsupported reports that the input isn't supported.
ErrUnsupported = errors.New("s2: unsupported input")
)
// DecodedLen returns the length of the decoded block.
func DecodedLen(src []byte) (int, error) {
v, _, err := decodedLen(src)
return v, err
}
// decodedLen returns the length of the decoded block and the number of bytes
// that the length header occupied.
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
v, n := binary.Uvarint(src)
if n <= 0 || v > 0xffffffff {
return 0, 0, ErrCorrupt
}
const wordSize = 32 << (^uint(0) >> 32 & 1)
if wordSize == 32 && v > 0x7fffffff {
return 0, 0, ErrTooLarge
}
return int(v), n, nil
}
const (
decodeErrCodeCorrupt = 1
)
// Decode returns the decoded form of src. The returned slice may be a sub-
// slice of dst if dst was large enough to hold the entire decoded block.
// Otherwise, a newly allocated slice will be returned.
//
// The dst and src must not overlap. It is valid to pass a nil dst.
func Decode(dst, src []byte) ([]byte, error) {
dLen, s, err := decodedLen(src)
if err != nil {
return nil, err
}
if dLen <= cap(dst) {
dst = dst[:dLen]
} else {
dst = make([]byte, dLen)
}
if s2Decode(dst, src[s:]) != 0 {
return nil, ErrCorrupt
}
return dst, nil
}
// NewReader returns a new Reader that decompresses from r, using the framing
// format described at
// https://github.com/google/snappy/blob/master/framing_format.txt with S2 changes.
func NewReader(r io.Reader, opts ...ReaderOption) *Reader {
nr := Reader{
r: r,
maxBlock: maxBlockSize,
}
for _, opt := range opts {
if err := opt(&nr); err != nil {
nr.err = err
return &nr
}
}
nr.maxBufSize = MaxEncodedLen(nr.maxBlock) + checksumSize
if nr.lazyBuf > 0 {
nr.buf = make([]byte, MaxEncodedLen(nr.lazyBuf)+checksumSize)
} else {
nr.buf = make([]byte, MaxEncodedLen(defaultBlockSize)+checksumSize)
}
nr.paramsOK = true
return &nr
}
// ReaderOption is an option for creating a decoder.
type ReaderOption func(*Reader) error
// ReaderMaxBlockSize allows controlling allocations if the stream
// has been compressed with a smaller WriterBlockSize, or with the default 1MB.
// Blocks must be this size or smaller to decompress,
// otherwise the decoder will return ErrUnsupported.
//
// For streams compressed with Snappy this can safely be set to 64KB (64 << 10).
//
// Default is the maximum limit of 4MB.
func ReaderMaxBlockSize(blockSize int) ReaderOption {
return func(r *Reader) error {
if blockSize > maxBlockSize || blockSize <= 0 {
return errors.New("s2: block size too large. Must be <= 4MB and > 0")
}
if r.lazyBuf == 0 && blockSize < defaultBlockSize {
r.lazyBuf = blockSize
}
r.maxBlock = blockSize
return nil
}
}
// ReaderAllocBlock allows controlling the upfront stream allocation,
// so that buffers are not initially allocated for frames bigger than this.
// If a bigger frame is seen, a bigger buffer will be allocated.
//
// Default is 1MB, which is default output size.
func ReaderAllocBlock(blockSize int) ReaderOption {
return func(r *Reader) error {
if blockSize > maxBlockSize || blockSize < 1024 {
return errors.New("s2: invalid ReaderAllocBlock. Must be <= 4MB and >= 1024")
}
r.lazyBuf = blockSize
return nil
}
}
// Reader is an io.Reader that can read Snappy-compressed bytes.
type Reader struct {
r io.Reader
err error
decoded []byte
buf []byte
// decoded[i:j] contains decoded bytes that have not yet been passed on.
i, j int
// maximum block size allowed.
maxBlock int
// maximum expected buffer size.
maxBufSize int
// alloc a buffer this size if > 0.
lazyBuf int
readHeader bool
paramsOK bool
snappyFrame bool
}
// ensureBufferSize will ensure that the buffer can take at least n bytes.
// If false is returned, n exceeds the maximum allowed buffer size.
func (r *Reader) ensureBufferSize(n int) bool {
if len(r.buf) >= n {
return true
}
if n > r.maxBufSize {
r.err = ErrCorrupt
return false
}
// Realloc buffer.
r.buf = make([]byte, n)
return true
}
// Reset discards any buffered data, resets all state, and switches the Snappy
// reader to read from r. This permits reusing a Reader rather than allocating
// a new one.
func (r *Reader) Reset(reader io.Reader) {
if !r.paramsOK {
return
}
r.r = reader
r.err = nil
r.i = 0
r.j = 0
r.readHeader = false
}
func (r *Reader) readFull(p []byte, allowEOF bool) (ok bool) {
if _, r.err = io.ReadFull(r.r, p); r.err != nil {
if r.err == io.ErrUnexpectedEOF || (r.err == io.EOF && !allowEOF) {
r.err = ErrCorrupt
}
return false
}
return true
}
// skipN will skip n bytes.
// If the supplied reader supports seeking that is used.
// tmp is used as a temporary buffer for reading.
// The supplied slice does not need to be the size of the read.
func (r *Reader) skipN(tmp []byte, n int, allowEOF bool) (ok bool) {
if rs, ok := r.r.(io.ReadSeeker); ok {
_, err := rs.Seek(int64(n), io.SeekCurrent)
if err == nil {
return true
}
if err == io.ErrUnexpectedEOF || (err == io.EOF && !allowEOF) {
r.err = ErrCorrupt
return false
}
}
for n > 0 {
if n < len(tmp) {
tmp = tmp[:n]
}
if _, r.err = io.ReadFull(r.r, tmp); r.err != nil {
if r.err == io.ErrUnexpectedEOF || (r.err == io.EOF && !allowEOF) {
r.err = ErrCorrupt
}
return false
}
n -= len(tmp)
}
return true
}
// Read satisfies the io.Reader interface.
func (r *Reader) Read(p []byte) (int, error) {
if r.err != nil {
return 0, r.err
}
for {
if r.i < r.j {
n := copy(p, r.decoded[r.i:r.j])
r.i += n
return n, nil
}
if !r.readFull(r.buf[:4], true) {
return 0, r.err
}
chunkType := r.buf[0]
if !r.readHeader {
if chunkType != chunkTypeStreamIdentifier {
r.err = ErrCorrupt
return 0, r.err
}
r.readHeader = true
}
chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16
// The chunk types are specified at
// https://github.com/google/snappy/blob/master/framing_format.txt
switch chunkType {
case chunkTypeCompressedData:
// Section 4.2. Compressed data (chunk type 0x00).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
if !r.ensureBufferSize(chunkLen) {
if r.err == nil {
r.err = ErrUnsupported
}
return 0, r.err
}
buf := r.buf[:chunkLen]
if !r.readFull(buf, false) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
buf = buf[checksumSize:]
n, err := DecodedLen(buf)
if err != nil {
r.err = err
return 0, r.err
}
if r.snappyFrame && n > maxSnappyBlockSize {
r.err = ErrCorrupt
return 0, r.err
}
if n > len(r.decoded) {
if n > r.maxBlock {
r.err = ErrCorrupt
return 0, r.err
}
r.decoded = make([]byte, n)
}
if _, err := Decode(r.decoded, buf); err != nil {
r.err = err
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCRC
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeUncompressedData:
// Section 4.3. Uncompressed data (chunk type 0x01).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
if !r.ensureBufferSize(chunkLen) {
if r.err == nil {
r.err = ErrUnsupported
}
return 0, r.err
}
buf := r.buf[:checksumSize]
if !r.readFull(buf, false) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
// Read directly into r.decoded instead of via r.buf.
n := chunkLen - checksumSize
if r.snappyFrame && n > maxSnappyBlockSize {
r.err = ErrCorrupt
return 0, r.err
}
if n > len(r.decoded) {
if n > r.maxBlock {
r.err = ErrCorrupt
return 0, r.err
}
r.decoded = make([]byte, n)
}
if !r.readFull(r.decoded[:n], false) {
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCRC
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeStreamIdentifier:
// Section 4.1. Stream identifier (chunk type 0xff).
if chunkLen != len(magicBody) {
r.err = ErrCorrupt
return 0, r.err
}
if !r.readFull(r.buf[:len(magicBody)], false) {
return 0, r.err
}
if string(r.buf[:len(magicBody)]) != magicBody {
if string(r.buf[:len(magicBody)]) != magicBodySnappy {
r.err = ErrCorrupt
return 0, r.err
} else {
r.snappyFrame = true
}
} else {
r.snappyFrame = false
}
continue
}
if chunkType <= 0x7f {
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
r.err = ErrUnsupported
return 0, r.err
}
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if chunkLen > maxBlockSize {
r.err = ErrUnsupported
return 0, r.err
}
if !r.skipN(r.buf, chunkLen, false) {
return 0, r.err
}
}
}
// Skip will skip n bytes forward in the decompressed output.
// For larger skips this consumes less CPU and is faster than reading output and discarding it.
// CRC is not checked on skipped blocks.
// io.ErrUnexpectedEOF is returned if the stream ends before all bytes have been skipped.
// If a decoding error is encountered subsequent calls to Read will also fail.
func (r *Reader) Skip(n int64) error {
if n < 0 {
return errors.New("attempted negative skip")
}
if r.err != nil {
return r.err
}
for n > 0 {
if r.i < r.j {
// Skip in buffer.
// decoded[i:j] contains decoded bytes that have not yet been passed on.
left := int64(r.j - r.i)
if left >= n {
r.i += int(n)
return nil
}
n -= int64(r.j - r.i)
r.i, r.j = 0, 0
}
// Buffer empty; read blocks until we have content.
if !r.readFull(r.buf[:4], true) {
if r.err == io.EOF {
r.err = io.ErrUnexpectedEOF
}
return r.err
}
chunkType := r.buf[0]
if !r.readHeader {
if chunkType != chunkTypeStreamIdentifier {
r.err = ErrCorrupt
return r.err
}
r.readHeader = true
}
chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16
// The chunk types are specified at
// https://github.com/google/snappy/blob/master/framing_format.txt
switch chunkType {
case chunkTypeCompressedData:
// Section 4.2. Compressed data (chunk type 0x00).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return r.err
}
if !r.ensureBufferSize(chunkLen) {
if r.err == nil {
r.err = ErrUnsupported
}
return r.err
}
buf := r.buf[:chunkLen]
if !r.readFull(buf, false) {
return r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
buf = buf[checksumSize:]
dLen, err := DecodedLen(buf)
if err != nil {
r.err = err
return r.err
}
if dLen > r.maxBlock {
r.err = ErrCorrupt
return r.err
}
// Check if destination is within this block
if int64(dLen) > n {
if len(r.decoded) < dLen {
r.decoded = make([]byte, dLen)
}
if _, err := Decode(r.decoded, buf); err != nil {
r.err = err
return r.err
}
if crc(r.decoded[:dLen]) != checksum {
r.err = ErrCorrupt
return r.err
}
} else {
// Skip block completely
n -= int64(dLen)
dLen = 0
}
r.i, r.j = 0, dLen
continue
case chunkTypeUncompressedData:
// Section 4.3. Uncompressed data (chunk type 0x01).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return r.err
}
if !r.ensureBufferSize(chunkLen) {
if r.err == nil {
r.err = ErrUnsupported
}
return r.err
}
buf := r.buf[:checksumSize]
if !r.readFull(buf, false) {
return r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
// Read directly into r.decoded instead of via r.buf.
n2 := chunkLen - checksumSize
if n2 > len(r.decoded) {
if n2 > r.maxBlock {
r.err = ErrCorrupt
return r.err
}
r.decoded = make([]byte, n2)
}
if !r.readFull(r.decoded[:n2], false) {
return r.err
}
if int64(n2) < n {
if crc(r.decoded[:n2]) != checksum {
r.err = ErrCorrupt
return r.err
}
}
r.i, r.j = 0, n2
continue
case chunkTypeStreamIdentifier:
// Section 4.1. Stream identifier (chunk type 0xff).
if chunkLen != len(magicBody) {
r.err = ErrCorrupt
return r.err
}
if !r.readFull(r.buf[:len(magicBody)], false) {
return r.err
}
if string(r.buf[:len(magicBody)]) != magicBody {
if string(r.buf[:len(magicBody)]) != magicBodySnappy {
r.err = ErrCorrupt
return r.err
}
}
continue
}
if chunkType <= 0x7f {
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
r.err = ErrUnsupported
return r.err
}
if chunkLen > maxBlockSize {
r.err = ErrUnsupported
return r.err
}
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.skipN(r.buf, chunkLen, false) {
return r.err
}
}
return nil
}
// ReadByte satisfies the io.ByteReader interface.
func (r *Reader) ReadByte() (byte, error) {
if r.err != nil {
return 0, r.err
}
if r.i < r.j {
c := r.decoded[r.i]
r.i++
return c, nil
}
var tmp [1]byte
for i := 0; i < 10; i++ {
n, err := r.Read(tmp[:])
if err != nil {
return 0, err
}
if n == 1 {
return tmp[0], nil
}
}
return 0, io.ErrNoProgress
}

568
vendor/github.com/klauspost/compress/s2/decode_amd64.s generated vendored Normal file
View File

@ -0,0 +1,568 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
// +build gc
// +build !noasm
#include "textflag.h"
#define R_TMP0 AX
#define R_TMP1 BX
#define R_LEN CX
#define R_OFF DX
#define R_SRC SI
#define R_DST DI
#define R_DBASE R8
#define R_DLEN R9
#define R_DEND R10
#define R_SBASE R11
#define R_SLEN R12
#define R_SEND R13
#define R_TMP2 R14
#define R_TMP3 R15
// The asm code generally follows the pure Go code in decode_other.go, except
// where marked with a "!!!".
// func decode(dst, src []byte) int
//
// All local variables fit into registers. The non-zero stack size is only to
// spill registers and push args when issuing a CALL. The register allocation:
// - R_TMP0 scratch
// - R_TMP1 scratch
// - R_LEN length or x (shared)
// - R_OFF offset
// - R_SRC &src[s]
// - R_DST &dst[d]
// + R_DBASE dst_base
// + R_DLEN dst_len
// + R_DEND dst_base + dst_len
// + R_SBASE src_base
// + R_SLEN src_len
// + R_SEND src_base + src_len
// - R_TMP2 used by doCopy
// - R_TMP3 used by doCopy
//
// The registers R_DBASE-R_SEND (marked with a "+") are set at the start of the
// function, and after a CALL returns, and are not otherwise modified.
//
// The d variable is implicitly R_DST - R_DBASE, and len(dst)-d is R_DEND - R_DST.
// The s variable is implicitly R_SRC - R_SBASE, and len(src)-s is R_SEND - R_SRC.
TEXT ·s2Decode(SB), NOSPLIT, $48-56
// Initialize R_SRC, R_DST and R_DBASE-R_SEND.
MOVQ dst_base+0(FP), R_DBASE
MOVQ dst_len+8(FP), R_DLEN
MOVQ R_DBASE, R_DST
MOVQ R_DBASE, R_DEND
ADDQ R_DLEN, R_DEND
MOVQ src_base+24(FP), R_SBASE
MOVQ src_len+32(FP), R_SLEN
MOVQ R_SBASE, R_SRC
MOVQ R_SBASE, R_SEND
ADDQ R_SLEN, R_SEND
XORQ R_OFF, R_OFF
loop:
// for s < len(src)
CMPQ R_SRC, R_SEND
JEQ end
// R_LEN = uint32(src[s])
//
// switch src[s] & 0x03
MOVBLZX (R_SRC), R_LEN
MOVL R_LEN, R_TMP1
ANDL $3, R_TMP1
CMPL R_TMP1, $1
JAE tagCopy
// ----------------------------------------
// The code below handles literal tags.
// case tagLiteral:
// x := uint32(src[s] >> 2)
// switch
SHRL $2, R_LEN
CMPL R_LEN, $60
JAE tagLit60Plus
// case x < 60:
// s++
INCQ R_SRC
doLit:
// This is the end of the inner "switch", when we have a literal tag.
//
// We assume that R_LEN == x and x fits in a uint32, where x is the variable
// used in the pure Go decode_other.go code.
// length = int(x) + 1
//
// Unlike the pure Go code, we don't need to check if length <= 0 because
// R_LEN can hold 64 bits, so the increment cannot overflow.
INCQ R_LEN
// Prepare to check if copying length bytes will run past the end of dst or
// src.
//
// R_TMP0 = len(dst) - d
// R_TMP1 = len(src) - s
MOVQ R_DEND, R_TMP0
SUBQ R_DST, R_TMP0
MOVQ R_SEND, R_TMP1
SUBQ R_SRC, R_TMP1
// !!! Try a faster technique for short (16 or fewer bytes) copies.
//
// if length > 16 || len(dst)-d < 16 || len(src)-s < 16 {
// goto callMemmove // Fall back on calling runtime·memmove.
// }
//
// The C++ snappy code calls this TryFastAppend. It also checks len(src)-s
// against 21 instead of 16, because it cannot assume that all of its input
// is contiguous in memory and so it needs to leave enough source bytes to
// read the next tag without refilling buffers, but Go's Decode assumes
// contiguousness (the src argument is a []byte).
CMPQ R_LEN, $16
JGT callMemmove
CMPQ R_TMP0, $16
JLT callMemmove
CMPQ R_TMP1, $16
JLT callMemmove
// !!! Implement the copy from src to dst as a 16-byte load and store.
// (Decode's documentation says that dst and src must not overlap.)
//
// This always copies 16 bytes, instead of only length bytes, but that's
// OK. If the input is a valid Snappy encoding then subsequent iterations
// will fix up the overrun. Otherwise, Decode returns a nil []byte (and a
// non-nil error), so the overrun will be ignored.
//
// Note that on amd64, it is legal and cheap to issue unaligned 8-byte or
// 16-byte loads and stores. This technique probably wouldn't be as
// effective on architectures that are fussier about alignment.
MOVOU 0(R_SRC), X0
MOVOU X0, 0(R_DST)
// d += length
// s += length
ADDQ R_LEN, R_DST
ADDQ R_LEN, R_SRC
JMP loop
callMemmove:
// if length > len(dst)-d || length > len(src)-s { etc }
CMPQ R_LEN, R_TMP0
JGT errCorrupt
CMPQ R_LEN, R_TMP1
JGT errCorrupt
// copy(dst[d:], src[s:s+length])
//
// This means calling runtime·memmove(&dst[d], &src[s], length), so we push
// R_DST, R_SRC and R_LEN as arguments. Coincidentally, we also need to spill those
// three registers to the stack, to save local variables across the CALL.
MOVQ R_DST, 0(SP)
MOVQ R_SRC, 8(SP)
MOVQ R_LEN, 16(SP)
MOVQ R_DST, 24(SP)
MOVQ R_SRC, 32(SP)
MOVQ R_LEN, 40(SP)
MOVQ R_OFF, 48(SP)
CALL runtime·memmove(SB)
// Restore local variables: unspill registers from the stack and
// re-calculate R_DBASE-R_SEND.
MOVQ 24(SP), R_DST
MOVQ 32(SP), R_SRC
MOVQ 40(SP), R_LEN
MOVQ 48(SP), R_OFF
MOVQ dst_base+0(FP), R_DBASE
MOVQ dst_len+8(FP), R_DLEN
MOVQ R_DBASE, R_DEND
ADDQ R_DLEN, R_DEND
MOVQ src_base+24(FP), R_SBASE
MOVQ src_len+32(FP), R_SLEN
MOVQ R_SBASE, R_SEND
ADDQ R_SLEN, R_SEND
// d += length
// s += length
ADDQ R_LEN, R_DST
ADDQ R_LEN, R_SRC
JMP loop
tagLit60Plus:
// !!! This fragment does the
//
// s += x - 58; if uint(s) > uint(len(src)) { etc }
//
// checks. In the asm version, we code it once instead of once per switch case.
ADDQ R_LEN, R_SRC
SUBQ $58, R_SRC
CMPQ R_SRC, R_SEND
JA errCorrupt
// case x == 60:
CMPL R_LEN, $61
JEQ tagLit61
JA tagLit62Plus
// x = uint32(src[s-1])
MOVBLZX -1(R_SRC), R_LEN
JMP doLit
tagLit61:
// case x == 61:
// x = uint32(src[s-2]) | uint32(src[s-1])<<8
MOVWLZX -2(R_SRC), R_LEN
JMP doLit
tagLit62Plus:
CMPL R_LEN, $62
JA tagLit63
// case x == 62:
// x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
// We read one byte, safe to read one back, since we are just reading tag.
// x = binary.LittleEndian.Uint32(src[s-1:]) >> 8
MOVL -4(R_SRC), R_LEN
SHRL $8, R_LEN
JMP doLit
tagLit63:
// case x == 63:
// x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
MOVL -4(R_SRC), R_LEN
JMP doLit
// The code above handles literal tags.
// ----------------------------------------
// The code below handles copy tags.
tagCopy4:
// case tagCopy4:
// s += 5
ADDQ $5, R_SRC
// if uint(s) > uint(len(src)) { etc }
CMPQ R_SRC, R_SEND
JA errCorrupt
// length = 1 + int(src[s-5])>>2
SHRQ $2, R_LEN
INCQ R_LEN
// offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
MOVLQZX -4(R_SRC), R_OFF
JMP doCopy
tagCopy2:
// case tagCopy2:
// s += 3
ADDQ $3, R_SRC
// if uint(s) > uint(len(src)) { etc }
CMPQ R_SRC, R_SEND
JA errCorrupt
// length = 1 + int(src[s-3])>>2
SHRQ $2, R_LEN
INCQ R_LEN
// offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
MOVWQZX -2(R_SRC), R_OFF
JMP doCopy
tagCopy:
// We have a copy tag. We assume that:
// - R_TMP1 == src[s] & 0x03
// - R_LEN == src[s]
CMPQ R_TMP1, $2
JEQ tagCopy2
JA tagCopy4
// case tagCopy1:
// s += 2
ADDQ $2, R_SRC
// if uint(s) > uint(len(src)) { etc }
CMPQ R_SRC, R_SEND
JA errCorrupt
// offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
// length = 4 + int(src[s-2])>>2&0x7
MOVBQZX -1(R_SRC), R_TMP1
MOVQ R_LEN, R_TMP0
SHRQ $2, R_LEN
ANDQ $0xe0, R_TMP0
ANDQ $7, R_LEN
SHLQ $3, R_TMP0
ADDQ $4, R_LEN
ORQ R_TMP1, R_TMP0
// check if repeat code, ZF set by ORQ.
JZ repeatCode
// This is a regular copy, transfer our temporary value to R_OFF (offset)
MOVQ R_TMP0, R_OFF
JMP doCopy
// This is a repeat code.
repeatCode:
// If length < 9, reuse last offset, with the length already calculated.
CMPQ R_LEN, $9
JL doCopyRepeat
// Read additional bytes for length.
JE repeatLen1
// Rare, so the extra branch shouldn't hurt too much.
CMPQ R_LEN, $10
JE repeatLen2
JMP repeatLen3
// Read repeat lengths.
repeatLen1:
// s ++
ADDQ $1, R_SRC
// if uint(s) > uint(len(src)) { etc }
CMPQ R_SRC, R_SEND
JA errCorrupt
// length = src[s-1] + 8
MOVBQZX -1(R_SRC), R_LEN
ADDL $8, R_LEN
JMP doCopyRepeat
repeatLen2:
// s +=2
ADDQ $2, R_SRC
// if uint(s) > uint(len(src)) { etc }
CMPQ R_SRC, R_SEND
JA errCorrupt
// length = uint32(src[s-2]) | (uint32(src[s-1])<<8) + (1 << 8)
MOVWQZX -2(R_SRC), R_LEN
ADDL $260, R_LEN
JMP doCopyRepeat
repeatLen3:
// s +=3
ADDQ $3, R_SRC
// if uint(s) > uint(len(src)) { etc }
CMPQ R_SRC, R_SEND
JA errCorrupt
// length = uint32(src[s-3]) | (uint32(src[s-2])<<8) | (uint32(src[s-1])<<16) + (1 << 16)
// Read one byte further back (just part of the tag, shifted out)
MOVL -4(R_SRC), R_LEN
SHRL $8, R_LEN
ADDL $65540, R_LEN
JMP doCopyRepeat
doCopy:
// This is the end of the outer "switch", when we have a copy tag.
//
// We assume that:
// - R_LEN == length && R_LEN > 0
// - R_OFF == offset
// if d < offset { etc }
MOVQ R_DST, R_TMP1
SUBQ R_DBASE, R_TMP1
CMPQ R_TMP1, R_OFF
JLT errCorrupt
// Repeat values can skip the test above, since any offset > 0 will be in dst.
doCopyRepeat:
// if offset <= 0 { etc }
CMPQ R_OFF, $0
JLE errCorrupt
// if length > len(dst)-d { etc }
MOVQ R_DEND, R_TMP1
SUBQ R_DST, R_TMP1
CMPQ R_LEN, R_TMP1
JGT errCorrupt
// forwardCopy(dst[d:d+length], dst[d-offset:]); d += length
//
// Set:
// - R_TMP2 = len(dst)-d
// - R_TMP3 = &dst[d-offset]
MOVQ R_DEND, R_TMP2
SUBQ R_DST, R_TMP2
MOVQ R_DST, R_TMP3
SUBQ R_OFF, R_TMP3
// !!! Try a faster technique for short (16 or fewer bytes) forward copies.
//
// First, try using two 8-byte load/stores, similar to the doLit technique
// above. Even if dst[d:d+length] and dst[d-offset:] can overlap, this is
// still OK if offset >= 8. Note that this has to be two 8-byte load/stores
// and not one 16-byte load/store, and the first store has to be before the
// second load, due to the overlap if offset is in the range [8, 16).
//
// if length > 16 || offset < 8 || len(dst)-d < 16 {
// goto slowForwardCopy
// }
// copy 16 bytes
// d += length
CMPQ R_LEN, $16
JGT slowForwardCopy
CMPQ R_OFF, $8
JLT slowForwardCopy
CMPQ R_TMP2, $16
JLT slowForwardCopy
MOVQ 0(R_TMP3), R_TMP0
MOVQ R_TMP0, 0(R_DST)
MOVQ 8(R_TMP3), R_TMP1
MOVQ R_TMP1, 8(R_DST)
ADDQ R_LEN, R_DST
JMP loop
slowForwardCopy:
// !!! If the forward copy is longer than 16 bytes, or if offset < 8, we
// can still try 8-byte load stores, provided we can overrun up to 10 extra
// bytes. As above, the overrun will be fixed up by subsequent iterations
// of the outermost loop.
//
// The C++ snappy code calls this technique IncrementalCopyFastPath. Its
// commentary says:
//
// ----
//
// The main part of this loop is a simple copy of eight bytes at a time
// until we've copied (at least) the requested amount of bytes. However,
// if d and d-offset are less than eight bytes apart (indicating a
// repeating pattern of length < 8), we first need to expand the pattern in
// order to get the correct results. For instance, if the buffer looks like
// this, with the eight-byte <d-offset> and <d> patterns marked as
// intervals:
//
// abxxxxxxxxxxxx
// [------] d-offset
// [------] d
//
// a single eight-byte copy from <d-offset> to <d> will repeat the pattern
// once, after which we can move <d> two bytes without moving <d-offset>:
//
// ababxxxxxxxxxx
// [------] d-offset
// [------] d
//
// and repeat the exercise until the two no longer overlap.
//
// This allows us to do very well in the special case of one single byte
// repeated many times, without taking a big hit for more general cases.
//
// The worst case of extra writing past the end of the match occurs when
// offset == 1 and length == 1; the last copy will read from byte positions
// [0..7] and write to [4..11], whereas it was only supposed to write to
// position 1. Thus, ten excess bytes.
//
// ----
//
// That "10 byte overrun" worst case is confirmed by Go's
// TestSlowForwardCopyOverrun, which also tests the fixUpSlowForwardCopy
// and finishSlowForwardCopy algorithm.
//
// if length > len(dst)-d-10 {
// goto verySlowForwardCopy
// }
SUBQ $10, R_TMP2
CMPQ R_LEN, R_TMP2
JGT verySlowForwardCopy
// We want to keep the offset, so we use R_TMP2 from here.
MOVQ R_OFF, R_TMP2
makeOffsetAtLeast8:
// !!! As above, expand the pattern so that offset >= 8 and we can use
// 8-byte load/stores.
//
// for offset < 8 {
// copy 8 bytes from dst[d-offset:] to dst[d:]
// length -= offset
// d += offset
// offset += offset
// // The two previous lines together means that d-offset, and therefore
// // R_TMP3, is unchanged.
// }
CMPQ R_TMP2, $8
JGE fixUpSlowForwardCopy
MOVQ (R_TMP3), R_TMP1
MOVQ R_TMP1, (R_DST)
SUBQ R_TMP2, R_LEN
ADDQ R_TMP2, R_DST
ADDQ R_TMP2, R_TMP2
JMP makeOffsetAtLeast8
fixUpSlowForwardCopy:
// !!! Add length (which might be negative now) to d (implied by R_DST being
// &dst[d]) so that d ends up at the right place when we jump back to the
// top of the loop. Before we do that, though, we save R_DST to R_TMP0 so that, if
// length is positive, copying the remaining length bytes will write to the
// right place.
MOVQ R_DST, R_TMP0
ADDQ R_LEN, R_DST
finishSlowForwardCopy:
// !!! Repeat 8-byte load/stores until length <= 0. Ending with a negative
// length means that we overrun, but as above, that will be fixed up by
// subsequent iterations of the outermost loop.
CMPQ R_LEN, $0
JLE loop
MOVQ (R_TMP3), R_TMP1
MOVQ R_TMP1, (R_TMP0)
ADDQ $8, R_TMP3
ADDQ $8, R_TMP0
SUBQ $8, R_LEN
JMP finishSlowForwardCopy
verySlowForwardCopy:
// verySlowForwardCopy is a simple implementation of forward copy. In C
// parlance, this is a do/while loop instead of a while loop, since we know
// that length > 0. In Go syntax:
//
// for {
// dst[d] = dst[d - offset]
// d++
// length--
// if length == 0 {
// break
// }
// }
MOVB (R_TMP3), R_TMP1
MOVB R_TMP1, (R_DST)
INCQ R_TMP3
INCQ R_DST
DECQ R_LEN
JNZ verySlowForwardCopy
JMP loop
// The code above handles copy tags.
// ----------------------------------------
end:
// This is the end of the "for s < len(src)".
//
// if d != len(dst) { etc }
CMPQ R_DST, R_DEND
JNE errCorrupt
// return 0
MOVQ $0, ret+48(FP)
RET
errCorrupt:
// return decodeErrCodeCorrupt
MOVQ $1, ret+48(FP)
RET

574
vendor/github.com/klauspost/compress/s2/decode_arm64.s generated vendored Normal file
View File

@ -0,0 +1,574 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
// +build gc
// +build !noasm
#include "textflag.h"
#define R_TMP0 R2
#define R_TMP1 R3
#define R_LEN R4
#define R_OFF R5
#define R_SRC R6
#define R_DST R7
#define R_DBASE R8
#define R_DLEN R9
#define R_DEND R10
#define R_SBASE R11
#define R_SLEN R12
#define R_SEND R13
#define R_TMP2 R14
#define R_TMP3 R15
// TEST_SRC will check if R_SRC is <= SRC_END
#define TEST_SRC() \
CMP R_SEND, R_SRC \
BGT errCorrupt
// MOVD R_SRC, R_TMP1
// SUB R_SBASE, R_TMP1, R_TMP1
// CMP R_SLEN, R_TMP1
// BGT errCorrupt
// The asm code generally follows the pure Go code in decode_other.go, except
// where marked with a "!!!".
// func decode(dst, src []byte) int
//
// All local variables fit into registers. The non-zero stack size is only to
// spill registers and push args when issuing a CALL. The register allocation:
// - R_TMP0 scratch
// - R_TMP1 scratch
// - R_LEN length or x
// - R_OFF offset
// - R_SRC &src[s]
// - R_DST &dst[d]
// + R_DBASE dst_base
// + R_DLEN dst_len
// + R_DEND dst_base + dst_len
// + R_SBASE src_base
// + R_SLEN src_len
// + R_SEND src_base + src_len
// - R_TMP2 used by doCopy
// - R_TMP3 used by doCopy
//
// The registers R_DBASE-R_SEND (marked with a "+") are set at the start of the
// function, and after a CALL returns, and are not otherwise modified.
//
// The d variable is implicitly R_DST - R_DBASE, and len(dst)-d is R_DEND - R_DST.
// The s variable is implicitly R_SRC - R_SBASE, and len(src)-s is R_SEND - R_SRC.
TEXT ·s2Decode(SB), NOSPLIT, $56-64
// Initialize R_SRC, R_DST and R_DBASE-R_SEND.
MOVD dst_base+0(FP), R_DBASE
MOVD dst_len+8(FP), R_DLEN
MOVD R_DBASE, R_DST
MOVD R_DBASE, R_DEND
ADD R_DLEN, R_DEND, R_DEND
MOVD src_base+24(FP), R_SBASE
MOVD src_len+32(FP), R_SLEN
MOVD R_SBASE, R_SRC
MOVD R_SBASE, R_SEND
ADD R_SLEN, R_SEND, R_SEND
MOVD $0, R_OFF
loop:
// for s < len(src)
CMP R_SEND, R_SRC
BEQ end
// R_LEN = uint32(src[s])
//
// switch src[s] & 0x03
MOVBU (R_SRC), R_LEN
MOVW R_LEN, R_TMP1
ANDW $3, R_TMP1
MOVW $1, R1
CMPW R1, R_TMP1
BGE tagCopy
// ----------------------------------------
// The code below handles literal tags.
// case tagLiteral:
// x := uint32(src[s] >> 2)
// switch
MOVW $60, R1
LSRW $2, R_LEN, R_LEN
CMPW R_LEN, R1
BLS tagLit60Plus
// case x < 60:
// s++
ADD $1, R_SRC, R_SRC
doLit:
// This is the end of the inner "switch", when we have a literal tag.
//
// We assume that R_LEN == x and x fits in a uint32, where x is the variable
// used in the pure Go decode_other.go code.
// length = int(x) + 1
//
// Unlike the pure Go code, we don't need to check if length <= 0 because
// R_LEN can hold 64 bits, so the increment cannot overflow.
ADD $1, R_LEN, R_LEN
// Prepare to check if copying length bytes will run past the end of dst or
// src.
//
// R_TMP0 = len(dst) - d
// R_TMP1 = len(src) - s
MOVD R_DEND, R_TMP0
SUB R_DST, R_TMP0, R_TMP0
MOVD R_SEND, R_TMP1
SUB R_SRC, R_TMP1, R_TMP1
// !!! Try a faster technique for short (16 or fewer bytes) copies.
//
// if length > 16 || len(dst)-d < 16 || len(src)-s < 16 {
// goto callMemmove // Fall back on calling runtime·memmove.
// }
//
// The C++ snappy code calls this TryFastAppend. It also checks len(src)-s
// against 21 instead of 16, because it cannot assume that all of its input
// is contiguous in memory and so it needs to leave enough source bytes to
// read the next tag without refilling buffers, but Go's Decode assumes
// contiguousness (the src argument is a []byte).
CMP $16, R_LEN
BGT callMemmove
CMP $16, R_TMP0
BLT callMemmove
CMP $16, R_TMP1
BLT callMemmove
// !!! Implement the copy from src to dst as a 16-byte load and store.
// (Decode's documentation says that dst and src must not overlap.)
//
// This always copies 16 bytes, instead of only length bytes, but that's
// OK. If the input is a valid Snappy encoding then subsequent iterations
// will fix up the overrun. Otherwise, Decode returns a nil []byte (and a
// non-nil error), so the overrun will be ignored.
//
// Note that on arm64, it is legal and cheap to issue unaligned 8-byte or
// 16-byte loads and stores. This technique probably wouldn't be as
// effective on architectures that are fussier about alignment.
LDP 0(R_SRC), (R_TMP2, R_TMP3)
STP (R_TMP2, R_TMP3), 0(R_DST)
// d += length
// s += length
ADD R_LEN, R_DST, R_DST
ADD R_LEN, R_SRC, R_SRC
B loop
callMemmove:
// if length > len(dst)-d || length > len(src)-s { etc }
CMP R_TMP0, R_LEN
BGT errCorrupt
CMP R_TMP1, R_LEN
BGT errCorrupt
// copy(dst[d:], src[s:s+length])
//
// This means calling runtime·memmove(&dst[d], &src[s], length), so we push
// R_DST, R_SRC and R_LEN as arguments. Coincidentally, we also need to spill those
// three registers to the stack, to save local variables across the CALL.
MOVD R_DST, 8(RSP)
MOVD R_SRC, 16(RSP)
MOVD R_LEN, 24(RSP)
MOVD R_DST, 32(RSP)
MOVD R_SRC, 40(RSP)
MOVD R_LEN, 48(RSP)
MOVD R_OFF, 56(RSP)
CALL runtime·memmove(SB)
// Restore local variables: unspill registers from the stack and
// re-calculate R_DBASE-R_SEND.
MOVD 32(RSP), R_DST
MOVD 40(RSP), R_SRC
MOVD 48(RSP), R_LEN
MOVD 56(RSP), R_OFF
MOVD dst_base+0(FP), R_DBASE
MOVD dst_len+8(FP), R_DLEN
MOVD R_DBASE, R_DEND
ADD R_DLEN, R_DEND, R_DEND
MOVD src_base+24(FP), R_SBASE
MOVD src_len+32(FP), R_SLEN
MOVD R_SBASE, R_SEND
ADD R_SLEN, R_SEND, R_SEND
// d += length
// s += length
ADD R_LEN, R_DST, R_DST
ADD R_LEN, R_SRC, R_SRC
B loop
tagLit60Plus:
// !!! This fragment does the
//
// s += x - 58; if uint(s) > uint(len(src)) { etc }
//
// checks. In the asm version, we code it once instead of once per switch case.
ADD R_LEN, R_SRC, R_SRC
SUB $58, R_SRC, R_SRC
TEST_SRC()
// case x == 60:
MOVW $61, R1
CMPW R1, R_LEN
BEQ tagLit61
BGT tagLit62Plus
// x = uint32(src[s-1])
MOVBU -1(R_SRC), R_LEN
B doLit
tagLit61:
// case x == 61:
// x = uint32(src[s-2]) | uint32(src[s-1])<<8
MOVHU -2(R_SRC), R_LEN
B doLit
tagLit62Plus:
CMPW $62, R_LEN
BHI tagLit63
// case x == 62:
// x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
MOVHU -3(R_SRC), R_LEN
MOVBU -1(R_SRC), R_TMP1
ORR R_TMP1<<16, R_LEN
B doLit
tagLit63:
// case x == 63:
// x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
MOVWU -4(R_SRC), R_LEN
B doLit
// The code above handles literal tags.
// ----------------------------------------
// The code below handles copy tags.
tagCopy4:
// case tagCopy4:
// s += 5
ADD $5, R_SRC, R_SRC
// if uint(s) > uint(len(src)) { etc }
MOVD R_SRC, R_TMP1
SUB R_SBASE, R_TMP1, R_TMP1
CMP R_SLEN, R_TMP1
BGT errCorrupt
// length = 1 + int(src[s-5])>>2
MOVD $1, R1
ADD R_LEN>>2, R1, R_LEN
// offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
MOVWU -4(R_SRC), R_OFF
B doCopy
tagCopy2:
// case tagCopy2:
// s += 3
ADD $3, R_SRC, R_SRC
// if uint(s) > uint(len(src)) { etc }
TEST_SRC()
// length = 1 + int(src[s-3])>>2
MOVD $1, R1
ADD R_LEN>>2, R1, R_LEN
// offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
MOVHU -2(R_SRC), R_OFF
B doCopy
tagCopy:
// We have a copy tag. We assume that:
// - R_TMP1 == src[s] & 0x03
// - R_LEN == src[s]
CMP $2, R_TMP1
BEQ tagCopy2
BGT tagCopy4
// case tagCopy1:
// s += 2
ADD $2, R_SRC, R_SRC
// if uint(s) > uint(len(src)) { etc }
TEST_SRC()
// offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
// Calculate offset in R_TMP0 in case it is a repeat.
MOVD R_LEN, R_TMP0
AND $0xe0, R_TMP0
MOVBU -1(R_SRC), R_TMP1
ORR R_TMP0<<3, R_TMP1, R_TMP0
// length = 4 + int(src[s-2])>>2&0x7
MOVD $7, R1
AND R_LEN>>2, R1, R_LEN
ADD $4, R_LEN, R_LEN
// check if repeat code with offset 0.
CMP $0, R_TMP0
BEQ repeatCode
// This is a regular copy, transfer our temporary value to R_OFF (offset)
MOVD R_TMP0, R_OFF
B doCopy
// This is a repeat code.
repeatCode:
// If length < 9, reuse last offset, with the length already calculated.
CMP $9, R_LEN
BLT doCopyRepeat
BEQ repeatLen1
CMP $10, R_LEN
BEQ repeatLen2
repeatLen3:
// s += 3
ADD $3, R_SRC, R_SRC
// if uint(s) > uint(len(src)) { etc }
TEST_SRC()
// length = uint32(src[s-3]) | (uint32(src[s-2])<<8) | (uint32(src[s-1])<<16) + 65540
MOVBU -1(R_SRC), R_TMP0
MOVHU -3(R_SRC), R_LEN
ORR R_TMP0<<16, R_LEN, R_LEN
ADD $65540, R_LEN, R_LEN
B doCopyRepeat
repeatLen2:
// s += 2
ADD $2, R_SRC, R_SRC
// if uint(s) > uint(len(src)) { etc }
TEST_SRC()
// length = uint32(src[s-2]) | (uint32(src[s-1])<<8) + 260
MOVHU -2(R_SRC), R_LEN
ADD $260, R_LEN, R_LEN
B doCopyRepeat
repeatLen1:
// s += 1
ADD $1, R_SRC, R_SRC
// if uint(s) > uint(len(src)) { etc }
TEST_SRC()
// length = src[s-1] + 8
MOVBU -1(R_SRC), R_LEN
ADD $8, R_LEN, R_LEN
B doCopyRepeat
doCopy:
// This is the end of the outer "switch", when we have a copy tag.
//
// We assume that:
// - R_LEN == length && R_LEN > 0
// - R_OFF == offset
// if d < offset { etc }
MOVD R_DST, R_TMP1
SUB R_DBASE, R_TMP1, R_TMP1
CMP R_OFF, R_TMP1
BLT errCorrupt
// Repeat values can skip the test above, since any offset > 0 will be in dst.
doCopyRepeat:
// if offset <= 0 { etc }
CMP $0, R_OFF
BLE errCorrupt
// if length > len(dst)-d { etc }
MOVD R_DEND, R_TMP1
SUB R_DST, R_TMP1, R_TMP1
CMP R_TMP1, R_LEN
BGT errCorrupt
// forwardCopy(dst[d:d+length], dst[d-offset:]); d += length
//
// Set:
// - R_TMP2 = len(dst)-d
// - R_TMP3 = &dst[d-offset]
MOVD R_DEND, R_TMP2
SUB R_DST, R_TMP2, R_TMP2
MOVD R_DST, R_TMP3
SUB R_OFF, R_TMP3, R_TMP3
// !!! Try a faster technique for short (16 or fewer bytes) forward copies.
//
// First, try using two 8-byte load/stores, similar to the doLit technique
// above. Even if dst[d:d+length] and dst[d-offset:] can overlap, this is
// still OK if offset >= 8. Note that this has to be two 8-byte load/stores
// and not one 16-byte load/store, and the first store has to be before the
// second load, due to the overlap if offset is in the range [8, 16).
//
// if length > 16 || offset < 8 || len(dst)-d < 16 {
// goto slowForwardCopy
// }
// copy 16 bytes
// d += length
CMP $16, R_LEN
BGT slowForwardCopy
CMP $8, R_OFF
BLT slowForwardCopy
CMP $16, R_TMP2
BLT slowForwardCopy
MOVD 0(R_TMP3), R_TMP0
MOVD R_TMP0, 0(R_DST)
MOVD 8(R_TMP3), R_TMP1
MOVD R_TMP1, 8(R_DST)
ADD R_LEN, R_DST, R_DST
B loop
slowForwardCopy:
// !!! If the forward copy is longer than 16 bytes, or if offset < 8, we
// can still try 8-byte load/stores, provided we can overrun up to 10 extra
// bytes. As above, the overrun will be fixed up by subsequent iterations
// of the outermost loop.
//
// The C++ snappy code calls this technique IncrementalCopyFastPath. Its
// commentary says:
//
// ----
//
// The main part of this loop is a simple copy of eight bytes at a time
// until we've copied (at least) the requested amount of bytes. However,
// if d and d-offset are less than eight bytes apart (indicating a
// repeating pattern of length < 8), we first need to expand the pattern in
// order to get the correct results. For instance, if the buffer looks like
// this, with the eight-byte <d-offset> and <d> patterns marked as
// intervals:
//
// abxxxxxxxxxxxx
// [------] d-offset
// [------] d
//
// a single eight-byte copy from <d-offset> to <d> will repeat the pattern
// once, after which we can move <d> two bytes without moving <d-offset>:
//
// ababxxxxxxxxxx
// [------] d-offset
// [------] d
//
// and repeat the exercise until the two no longer overlap.
//
// This allows us to do very well in the special case of one single byte
// repeated many times, without taking a big hit for more general cases.
//
// The worst case of extra writing past the end of the match occurs when
// offset == 1 and length == 1; the last copy will read from byte positions
// [0..7] and write to [4..11], whereas it was only supposed to write to
// position 1. Thus, ten excess bytes.
//
// ----
//
// That "10 byte overrun" worst case is confirmed by Go's
// TestSlowForwardCopyOverrun, which also tests the fixUpSlowForwardCopy
// and finishSlowForwardCopy algorithm.
//
// if length > len(dst)-d-10 {
// goto verySlowForwardCopy
// }
SUB $10, R_TMP2, R_TMP2
CMP R_TMP2, R_LEN
BGT verySlowForwardCopy
// We want to keep the offset, so we use R_TMP2 from here.
MOVD R_OFF, R_TMP2
makeOffsetAtLeast8:
// !!! As above, expand the pattern so that offset >= 8 and we can use
// 8-byte load/stores.
//
// for offset < 8 {
// copy 8 bytes from dst[d-offset:] to dst[d:]
// length -= offset
// d += offset
// offset += offset
// // The two previous lines together means that d-offset, and therefore
// // R_TMP3, is unchanged.
// }
CMP $8, R_TMP2
BGE fixUpSlowForwardCopy
MOVD (R_TMP3), R_TMP1
MOVD R_TMP1, (R_DST)
SUB R_TMP2, R_LEN, R_LEN
ADD R_TMP2, R_DST, R_DST
ADD R_TMP2, R_TMP2, R_TMP2
B makeOffsetAtLeast8
fixUpSlowForwardCopy:
// !!! Add length (which might be negative now) to d (implied by R_DST being
// &dst[d]) so that d ends up at the right place when we jump back to the
// top of the loop. Before we do that, though, we save R_DST to R_TMP0 so that, if
// length is positive, copying the remaining length bytes will write to the
// right place.
MOVD R_DST, R_TMP0
ADD R_LEN, R_DST, R_DST
finishSlowForwardCopy:
// !!! Repeat 8-byte load/stores until length <= 0. Ending with a negative
// length means that we overrun, but as above, that will be fixed up by
// subsequent iterations of the outermost loop.
MOVD $0, R1
CMP R1, R_LEN
BLE loop
MOVD (R_TMP3), R_TMP1
MOVD R_TMP1, (R_TMP0)
ADD $8, R_TMP3, R_TMP3
ADD $8, R_TMP0, R_TMP0
SUB $8, R_LEN, R_LEN
B finishSlowForwardCopy
verySlowForwardCopy:
// verySlowForwardCopy is a simple implementation of forward copy. In C
// parlance, this is a do/while loop instead of a while loop, since we know
// that length > 0. In Go syntax:
//
// for {
// dst[d] = dst[d - offset]
// d++
// length--
// if length == 0 {
// break
// }
// }
MOVB (R_TMP3), R_TMP1
MOVB R_TMP1, (R_DST)
ADD $1, R_TMP3, R_TMP3
ADD $1, R_DST, R_DST
SUB $1, R_LEN, R_LEN
CBNZ R_LEN, verySlowForwardCopy
B loop
// The code above handles copy tags.
// ----------------------------------------
end:
// This is the end of the "for s < len(src)".
//
// if d != len(dst) { etc }
CMP R_DEND, R_DST
BNE errCorrupt
// return 0
MOVD $0, ret+48(FP)
RET
errCorrupt:
// return decodeErrCodeCorrupt
MOVD $1, R_TMP0
MOVD R_TMP0, ret+48(FP)
RET

vendor/github.com/klauspost/compress/s2/decode_asm.go generated vendored Normal file
@@ -0,0 +1,17 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build (amd64 || arm64) && !appengine && gc && !noasm
// +build amd64 arm64
// +build !appengine
// +build gc
// +build !noasm
package s2
// decode has the same semantics as in decode_other.go.
//
//go:noescape
func s2Decode(dst, src []byte) int
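For orientation, s2Decode is the internal entry point behind the package's public API; a minimal round-trip sketch, assuming the exported s2.Encode/s2.Decode signatures (everything outside the vendored files here is illustrative):
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/s2"
)

func main() {
	src := bytes.Repeat([]byte("hello s2 "), 1000)
	// Encode prepends the uvarint decompressed length, then the block.
	enc := s2.Encode(nil, src)
	// Decode reads the length header, sizes dst, and runs s2Decode.
	dec, err := s2.Decode(nil, enc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d -> %d bytes, equal: %v\n", len(enc), len(dec), bytes.Equal(src, dec))
}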

vendor/github.com/klauspost/compress/s2/decode_other.go generated vendored Normal file
@@ -0,0 +1,267 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build (!amd64 && !arm64) || appengine || !gc || noasm
// +build !amd64,!arm64 appengine !gc noasm
package s2
import (
"fmt"
"strconv"
)
// decode writes the decoding of src to dst. It assumes that the varint-encoded
// length of the decompressed bytes has already been read, and that len(dst)
// equals that length.
//
// It returns 0 on success or a decodeErrCodeXxx error code on failure.
func s2Decode(dst, src []byte) int {
const debug = false
if debug {
fmt.Println("Starting decode, dst len:", len(dst))
}
var d, s, length int
offset := 0
// As long as we can read at least 5 bytes...
for s < len(src)-5 {
switch src[s] & 0x03 {
case tagLiteral:
x := uint32(src[s] >> 2)
switch {
case x < 60:
s++
case x == 60:
s += 2
x = uint32(src[s-1])
case x == 61:
s += 3
x = uint32(src[s-2]) | uint32(src[s-1])<<8
case x == 62:
s += 4
x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
case x == 63:
s += 5
x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
}
length = int(x) + 1
if length > len(dst)-d || length > len(src)-s || (strconv.IntSize == 32 && length <= 0) {
return decodeErrCodeCorrupt
}
if debug {
fmt.Println("literals, length:", length, "d-after:", d+length)
}
copy(dst[d:], src[s:s+length])
d += length
s += length
continue
case tagCopy1:
s += 2
length = int(src[s-2]) >> 2 & 0x7
toffset := int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
if toffset == 0 {
if debug {
fmt.Print("(repeat) ")
}
// keep last offset
switch length {
case 5:
s += 1
length = int(uint32(src[s-1])) + 4
case 6:
s += 2
length = int(uint32(src[s-2])|(uint32(src[s-1])<<8)) + (1 << 8)
case 7:
s += 3
length = int(uint32(src[s-3])|(uint32(src[s-2])<<8)|(uint32(src[s-1])<<16)) + (1 << 16)
default: // lengths 0-4 need no extra bytes
}
} else {
offset = toffset
}
length += 4
case tagCopy2:
s += 3
length = 1 + int(src[s-3])>>2
offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
case tagCopy4:
s += 5
length = 1 + int(src[s-5])>>2
offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
}
if offset <= 0 || d < offset || length > len(dst)-d {
return decodeErrCodeCorrupt
}
if debug {
fmt.Println("copy, length:", length, "offset:", offset, "d-after:", d+length)
}
// Copy from an earlier sub-slice of dst to a later sub-slice.
// If no overlap, use the built-in copy:
if offset > length {
copy(dst[d:d+length], dst[d-offset:])
d += length
continue
}
// Unlike the built-in copy function, this byte-by-byte copy always runs
// forwards, even if the slices overlap. Conceptually, this is:
//
// d += forwardCopy(dst[d:d+length], dst[d-offset:])
//
// We align the slices into a and b and show the compiler they are the same size.
// This allows the loop to run without bounds checks.
a := dst[d : d+length]
b := dst[d-offset:]
b = b[:len(a)]
for i := range a {
a[i] = b[i]
}
d += length
}
// Decode the remaining input with extra bounds checks...
for s < len(src) {
switch src[s] & 0x03 {
case tagLiteral:
x := uint32(src[s] >> 2)
switch {
case x < 60:
s++
case x == 60:
s += 2
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-1])
case x == 61:
s += 3
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-2]) | uint32(src[s-1])<<8
case x == 62:
s += 4
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
case x == 63:
s += 5
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
}
length = int(x) + 1
if length > len(dst)-d || length > len(src)-s || (strconv.IntSize == 32 && length <= 0) {
return decodeErrCodeCorrupt
}
if debug {
fmt.Println("literals, length:", length, "d-after:", d+length)
}
copy(dst[d:], src[s:s+length])
d += length
s += length
continue
case tagCopy1:
s += 2
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = int(src[s-2]) >> 2 & 0x7
toffset := int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
if toffset == 0 {
if debug {
fmt.Print("(repeat) ")
}
// keep last offset
switch length {
case 5:
s += 1
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = int(uint32(src[s-1])) + 4
case 6:
s += 2
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = int(uint32(src[s-2])|(uint32(src[s-1])<<8)) + (1 << 8)
case 7:
s += 3
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = int(uint32(src[s-3])|(uint32(src[s-2])<<8)|(uint32(src[s-1])<<16)) + (1 << 16)
default: // lengths 0-4 need no extra bytes
}
} else {
offset = toffset
}
length += 4
case tagCopy2:
s += 3
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = 1 + int(src[s-3])>>2
offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
case tagCopy4:
s += 5
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = 1 + int(src[s-5])>>2
offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
}
if offset <= 0 || d < offset || length > len(dst)-d {
return decodeErrCodeCorrupt
}
if debug {
fmt.Println("copy, length:", length, "offset:", offset, "d-after:", d+length)
}
// Copy from an earlier sub-slice of dst to a later sub-slice.
// If no overlap, use the built-in copy:
if offset > length {
copy(dst[d:d+length], dst[d-offset:])
d += length
continue
}
// Unlike the built-in copy function, this byte-by-byte copy always runs
// forwards, even if the slices overlap. Conceptually, this is:
//
// d += forwardCopy(dst[d:d+length], dst[d-offset:])
//
// We align the slices into a and b and show the compiler they are the same size.
// This allows the loop to run without bounds checks.
a := dst[d : d+length]
b := dst[d-offset:]
b = b[:len(a)]
for i := range a {
a[i] = b[i]
}
d += length
}
if d != len(dst) {
return decodeErrCodeCorrupt
}
return 0
}
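The forward-copy tail above is the subtle part: the built-in copy has memmove semantics and would not replicate an overlapping pattern, so the code copies forwards byte by byte through two equal-length aliases, which also lets the compiler elide the per-iteration bounds checks. A standalone sketch of the same technique (the function name is chosen here for illustration):

// forwardCopy mirrors the overlap-safe copy inside s2Decode above.
// Copying strictly forwards means a small offset replicates its pattern:
// offset 1 smears one byte across the entire destination range.
func forwardCopy(dst []byte, d, offset, length int) {
	if offset > length {
		// No overlap: the built-in copy is correct and faster.
		copy(dst[d:d+length], dst[d-offset:])
		return
	}
	// Equal-length aliases let the compiler drop bounds checks in the loop.
	a := dst[d : d+length]
	b := dst[d-offset : d-offset+length]
	for i := range a {
		a[i] = b[i]
	}
}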

vendor/github.com/klauspost/compress/s2/encode.go generated vendored Normal file
File diff suppressed because it is too large

vendor/github.com/klauspost/compress/s2/encode_all.go generated vendored Normal file
@@ -0,0 +1,456 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package s2
import (
"bytes"
"encoding/binary"
"math/bits"
)
func load32(b []byte, i int) uint32 {
return binary.LittleEndian.Uint32(b[i:])
}
func load64(b []byte, i int) uint64 {
return binary.LittleEndian.Uint64(b[i:])
}
// hash6 returns the hash of the lowest 6 bytes of u to fit in a hash table with h bits.
// Preferably h should be a constant and should always be <64.
func hash6(u uint64, h uint8) uint32 {
const prime6bytes = 227718039650203
return uint32(((u << (64 - 48)) * prime6bytes) >> ((64 - h) & 63))
}
func encodeGo(dst, src []byte) []byte {
if n := MaxEncodedLen(len(src)); n < 0 {
panic(ErrTooLarge)
} else if len(dst) < n {
dst = make([]byte, n)
}
// The block starts with the varint-encoded length of the decompressed bytes.
d := binary.PutUvarint(dst, uint64(len(src)))
if len(src) == 0 {
return dst[:d]
}
if len(src) < minNonLiteralBlockSize {
d += emitLiteral(dst[d:], src)
return dst[:d]
}
n := encodeBlockGo(dst[d:], src)
if n > 0 {
d += n
return dst[:d]
}
// Not compressible
d += emitLiteral(dst[d:], src)
return dst[:d]
}
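// As a sketch of the other side of the header encodeGo writes: every block
// starts with the uvarint-encoded decompressed length, then the stream.
// decodedLenSketch is illustrative only and not part of the vendored API;
// it returns the decompressed length and header size, or -1, -1 on error.
func decodedLenSketch(block []byte) (length, hdrSize int) {
	n, hdr := binary.Uvarint(block)
	if hdr <= 0 {
		return -1, -1
	}
	return int(n), hdr
}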
// encodeBlockGo encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockGo(dst, src []byte) (d int) {
// Initialize the hash table.
const (
tableBits = 14
maxTableSize = 1 << tableBits
debug = false
)
var table [maxTableSize]uint32
// sLimit is when to stop looking for offset/length copies. The inputMargin
// lets us use a fast path for emitLiteral in the main loop, while we are
// looking for copies.
sLimit := len(src) - inputMargin
// Bail if we can't compress to at least this.
dstLimit := len(src) - len(src)>>5 - 5
// nextEmit is where in src the next emitLiteral should start from.
nextEmit := 0
// The encoded form must start with a literal, as there are no previous
// bytes to copy, so we start looking for hash matches at s == 1.
s := 1
cv := load64(src, s)
// We search for a repeat at -1, but don't output repeats when nextEmit == 0
repeat := 1
for {
candidate := 0
for {
// Next src position to check
nextS := s + (s-nextEmit)>>6 + 4
if nextS > sLimit {
goto emitRemainder
}
hash0 := hash6(cv, tableBits)
hash1 := hash6(cv>>8, tableBits)
candidate = int(table[hash0])
candidate2 := int(table[hash1])
table[hash0] = uint32(s)
table[hash1] = uint32(s + 1)
hash2 := hash6(cv>>16, tableBits)
// Check repeat at offset checkRep.
const checkRep = 1
if uint32(cv>>(checkRep*8)) == load32(src, s-repeat+checkRep) {
base := s + checkRep
// Extend back
for i := base - repeat; base > nextEmit && i > 0 && src[i-1] == src[base-1]; {
i--
base--
}
d += emitLiteral(dst[d:], src[nextEmit:base])
// Extend forward
candidate := s - repeat + 4 + checkRep
s += 4 + checkRep
for s <= sLimit {
if diff := load64(src, s) ^ load64(src, candidate); diff != 0 {
s += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
candidate += 8
}
if debug {
// Validate match.
if s <= candidate {
panic("s <= candidate")
}
a := src[base:s]
b := src[base-repeat : base-repeat+(s-base)]
if !bytes.Equal(a, b) {
panic("mismatch")
}
}
if nextEmit > 0 {
// same as `add := emitCopy(dst[d:], repeat, s-base)` but skips storing offset.
d += emitRepeat(dst[d:], repeat, s-base)
} else {
// First match, cannot be repeat.
d += emitCopy(dst[d:], repeat, s-base)
}
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
cv = load64(src, s)
continue
}
if uint32(cv) == load32(src, candidate) {
break
}
candidate = int(table[hash2])
if uint32(cv>>8) == load32(src, candidate2) {
table[hash2] = uint32(s + 2)
candidate = candidate2
s++
break
}
table[hash2] = uint32(s + 2)
if uint32(cv>>16) == load32(src, candidate) {
s += 2
break
}
cv = load64(src, nextS)
s = nextS
}
// Extend backwards.
// The top bytes will be rechecked to get the full match.
for candidate > 0 && s > nextEmit && src[candidate-1] == src[s-1] {
candidate--
s--
}
// Bail if we exceed the maximum size.
if d+(s-nextEmit) > dstLimit {
return 0
}
// A 4-byte match has been found. We'll later see if more than 4 bytes
// match. But, prior to the match, src[nextEmit:s] are unmatched. Emit
// them as literal bytes.
d += emitLiteral(dst[d:], src[nextEmit:s])
// Call emitCopy, and then see if another emitCopy could be our next
// move. Repeat until we find no match for the input immediately after
// what was consumed by the last emitCopy call.
//
// If we exit this loop normally then we need to call emitLiteral next,
// though we don't yet know how big the literal will be. We handle that
// by proceeding to the next iteration of the main loop. We also can
// exit this loop via goto if we get close to exhausting the input.
for {
// Invariant: we have a 4-byte match at s, and no need to emit any
// literal bytes prior to s.
base := s
repeat = base - candidate
// Extend the 4-byte match as long as possible.
s += 4
candidate += 4
for s <= len(src)-8 {
if diff := load64(src, s) ^ load64(src, candidate); diff != 0 {
s += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
candidate += 8
}
d += emitCopy(dst[d:], repeat, s-base)
if debug {
// Validate match.
if s <= candidate {
panic("s <= candidate")
}
a := src[base:s]
b := src[base-repeat : base-repeat+(s-base)]
if !bytes.Equal(a, b) {
panic("mismatch")
}
}
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
if d > dstLimit {
// Bail if we don't have space for more.
return 0
}
// Check for an immediate match, otherwise start search at s+1
x := load64(src, s-2)
m2Hash := hash6(x, tableBits)
currHash := hash6(x>>16, tableBits)
candidate = int(table[currHash])
table[m2Hash] = uint32(s - 2)
table[currHash] = uint32(s)
if debug && s == candidate {
panic("s == candidate")
}
if uint32(x>>16) != load32(src, candidate) {
cv = load64(src, s+1)
s++
break
}
}
}
emitRemainder:
if nextEmit < len(src) {
// Bail if we exceed the maximum size.
if d+len(src)-nextEmit > dstLimit {
return 0
}
d += emitLiteral(dst[d:], src[nextEmit:])
}
return d
}
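// skipStrideSketch restates the search stride used by the loops above
// (illustrative only): the step grows with the distance since the last
// emit, so long incompressible runs are skipped with few probes while
// compressible data is still scanned nearly byte by byte.
func skipStrideSketch(s, nextEmit int) int {
	return s + (s-nextEmit)>>6 + 4
}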
func encodeBlockSnappyGo(dst, src []byte) (d int) {
// Initialize the hash table.
const (
tableBits = 14
maxTableSize = 1 << tableBits
)
var table [maxTableSize]uint32
// sLimit is when to stop looking for offset/length copies. The inputMargin
// lets us use a fast path for emitLiteral in the main loop, while we are
// looking for copies.
sLimit := len(src) - inputMargin
// Bail if we can't compress to at least this.
dstLimit := len(src) - len(src)>>5 - 5
// nextEmit is where in src the next emitLiteral should start from.
nextEmit := 0
// The encoded form must start with a literal, as there are no previous
// bytes to copy, so we start looking for hash matches at s == 1.
s := 1
cv := load64(src, s)
// We search for a repeat at -1, but don't output repeats when nextEmit == 0
repeat := 1
for {
candidate := 0
for {
// Next src position to check
nextS := s + (s-nextEmit)>>6 + 4
if nextS > sLimit {
goto emitRemainder
}
hash0 := hash6(cv, tableBits)
hash1 := hash6(cv>>8, tableBits)
candidate = int(table[hash0])
candidate2 := int(table[hash1])
table[hash0] = uint32(s)
table[hash1] = uint32(s + 1)
hash2 := hash6(cv>>16, tableBits)
// Check repeat at offset checkRep.
const checkRep = 1
if uint32(cv>>(checkRep*8)) == load32(src, s-repeat+checkRep) {
base := s + checkRep
// Extend back
for i := base - repeat; base > nextEmit && i > 0 && src[i-1] == src[base-1]; {
i--
base--
}
d += emitLiteral(dst[d:], src[nextEmit:base])
// Extend forward
candidate := s - repeat + 4 + checkRep
s += 4 + checkRep
for s <= sLimit {
if diff := load64(src, s) ^ load64(src, candidate); diff != 0 {
s += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
candidate += 8
}
d += emitCopyNoRepeat(dst[d:], repeat, s-base)
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
cv = load64(src, s)
continue
}
if uint32(cv) == load32(src, candidate) {
break
}
candidate = int(table[hash2])
if uint32(cv>>8) == load32(src, candidate2) {
table[hash2] = uint32(s + 2)
candidate = candidate2
s++
break
}
table[hash2] = uint32(s + 2)
if uint32(cv>>16) == load32(src, candidate) {
s += 2
break
}
cv = load64(src, nextS)
s = nextS
}
// Extend backwards
for candidate > 0 && s > nextEmit && src[candidate-1] == src[s-1] {
candidate--
s--
}
// Bail if we exceed the maximum size.
if d+(s-nextEmit) > dstLimit {
return 0
}
// A 4-byte match has been found. We'll later see if more than 4 bytes
// match. But, prior to the match, src[nextEmit:s] are unmatched. Emit
// them as literal bytes.
d += emitLiteral(dst[d:], src[nextEmit:s])
// Call emitCopy, and then see if another emitCopy could be our next
// move. Repeat until we find no match for the input immediately after
// what was consumed by the last emitCopy call.
//
// If we exit this loop normally then we need to call emitLiteral next,
// though we don't yet know how big the literal will be. We handle that
// by proceeding to the next iteration of the main loop. We also can
// exit this loop via goto if we get close to exhausting the input.
for {
// Invariant: we have a 4-byte match at s, and no need to emit any
// literal bytes prior to s.
base := s
repeat = base - candidate
// Extend the 4-byte match as long as possible.
s += 4
candidate += 4
for s <= len(src)-8 {
if diff := load64(src, s) ^ load64(src, candidate); diff != 0 {
s += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
candidate += 8
}
d += emitCopyNoRepeat(dst[d:], repeat, s-base)
if false {
// Validate match.
a := src[base:s]
b := src[base-repeat : base-repeat+(s-base)]
if !bytes.Equal(a, b) {
panic("mismatch")
}
}
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
if d > dstLimit {
// Bail if we don't have space for more.
return 0
}
// Check for an immediate match, otherwise start search at s+1
x := load64(src, s-2)
m2Hash := hash6(x, tableBits)
currHash := hash6(x>>16, tableBits)
candidate = int(table[currHash])
table[m2Hash] = uint32(s - 2)
table[currHash] = uint32(s)
if uint32(x>>16) != load32(src, candidate) {
cv = load64(src, s+1)
s++
break
}
}
}
emitRemainder:
if nextEmit < len(src) {
// Bail if we exceed the maximum size.
if d+len(src)-nextEmit > dstLimit {
return 0
}
d += emitLiteral(dst[d:], src[nextEmit:])
}
return d
}

vendor/github.com/klauspost/compress/s2/encode_amd64.go generated vendored Normal file
@@ -0,0 +1,142 @@
//go:build !appengine && !noasm && gc
// +build !appengine,!noasm,gc
package s2
// encodeBlock encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlock(dst, src []byte) (d int) {
const (
// Use 12 bit table when less than...
limit12B = 16 << 10
// Use 10 bit table when less than...
limit10B = 4 << 10
// Use 8 bit table when less than...
limit8B = 512
)
if len(src) >= 4<<20 {
return encodeBlockAsm(dst, src)
}
if len(src) >= limit12B {
return encodeBlockAsm4MB(dst, src)
}
if len(src) >= limit10B {
return encodeBlockAsm12B(dst, src)
}
if len(src) >= limit8B {
return encodeBlockAsm10B(dst, src)
}
if len(src) < minNonLiteralBlockSize {
return 0
}
return encodeBlockAsm8B(dst, src)
}
// encodeBlockBetter encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockBetter(dst, src []byte) (d int) {
const (
// Use 12 bit table when less than...
limit12B = 16 << 10
// Use 10 bit table when less than...
limit10B = 4 << 10
// Use 8 bit table when less than...
limit8B = 512
)
if len(src) > 4<<20 {
return encodeBetterBlockAsm(dst, src)
}
if len(src) >= limit12B {
return encodeBetterBlockAsm4MB(dst, src)
}
if len(src) >= limit10B {
return encodeBetterBlockAsm12B(dst, src)
}
if len(src) >= limit8B {
return encodeBetterBlockAsm10B(dst, src)
}
if len(src) < minNonLiteralBlockSize {
return 0
}
return encodeBetterBlockAsm8B(dst, src)
}
// encodeBlockSnappy encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockSnappy(dst, src []byte) (d int) {
const (
// Use 12 bit table when less than...
limit12B = 16 << 10
// Use 10 bit table when less than...
limit10B = 4 << 10
// Use 8 bit table when less than...
limit8B = 512
)
if len(src) >= 64<<10 {
return encodeSnappyBlockAsm(dst, src)
}
if len(src) >= limit12B {
return encodeSnappyBlockAsm64K(dst, src)
}
if len(src) >= limit10B {
return encodeSnappyBlockAsm12B(dst, src)
}
if len(src) >= limit8B {
return encodeSnappyBlockAsm10B(dst, src)
}
if len(src) < minNonLiteralBlockSize {
return 0
}
return encodeSnappyBlockAsm8B(dst, src)
}
// encodeBlockBetterSnappy encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockBetterSnappy(dst, src []byte) (d int) {
const (
// Use 12 bit table when less than...
limit12B = 16 << 10
// Use 10 bit table when less than...
limit10B = 4 << 10
// Use 8 bit table when less than...
limit8B = 512
)
if len(src) >= 64<<10 {
return encodeSnappyBetterBlockAsm(dst, src)
}
if len(src) >= limit12B {
return encodeSnappyBetterBlockAsm64K(dst, src)
}
if len(src) >= limit10B {
return encodeSnappyBetterBlockAsm12B(dst, src)
}
if len(src) >= limit8B {
return encodeSnappyBetterBlockAsm10B(dst, src)
}
if len(src) < minNonLiteralBlockSize {
return 0
}
return encodeSnappyBetterBlockAsm8B(dst, src)
}
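The dispatchers above trade hash-table size against cache footprint: smaller inputs get smaller tables. A sketch of encodeBlock's thresholds (the helper name is chosen here for illustration; the full-table bit width is assumed from encodeBlockGo):

// tableBitsFor maps an input size to the hash-table bits encodeBlock selects.
func tableBitsFor(srcLen int) int {
	switch {
	case srcLen < 512: // below limit8B
		return 8
	case srcLen < 4<<10: // below limit10B
		return 10
	case srcLen < 16<<10: // below limit12B
		return 12
	default:
		return 14 // full-size table, matching encodeBlockGo's tableBits
	}
}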

vendor/github.com/klauspost/compress/s2/encode_best.go generated vendored Normal file
@@ -0,0 +1,604 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package s2
import (
"fmt"
"math/bits"
)
// encodeBlockBest encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockBest(dst, src []byte) (d int) {
// Initialize the hash tables.
const (
// Long hash matches.
lTableBits = 19
maxLTableSize = 1 << lTableBits
// Short hash matches.
sTableBits = 16
maxSTableSize = 1 << sTableBits
inputMargin = 8 + 2
)
// sLimit is when to stop looking for offset/length copies. The inputMargin
// lets us use a fast path for emitLiteral in the main loop, while we are
// looking for copies.
sLimit := len(src) - inputMargin
if len(src) < minNonLiteralBlockSize {
return 0
}
var lTable [maxLTableSize]uint64
var sTable [maxSTableSize]uint64
// Bail if we can't compress to at least this.
dstLimit := len(src) - 5
// nextEmit is where in src the next emitLiteral should start from.
nextEmit := 0
// The encoded form must start with a literal, as there are no previous
// bytes to copy, so we start looking for hash matches at s == 1.
s := 1
cv := load64(src, s)
// We search for a repeat at -1, but don't output repeats when nextEmit == 0
repeat := 1
const lowbitMask = 0xffffffff
getCur := func(x uint64) int {
return int(x & lowbitMask)
}
getPrev := func(x uint64) int {
return int(x >> 32)
}
const maxSkip = 64
for {
type match struct {
offset int
s int
length int
score int
rep bool
}
var best match
for {
// Next src position to check
nextS := (s-nextEmit)>>8 + 1
if nextS > maxSkip {
nextS = s + maxSkip
} else {
nextS += s
}
if nextS > sLimit {
goto emitRemainder
}
hashL := hash8(cv, lTableBits)
hashS := hash4(cv, sTableBits)
candidateL := lTable[hashL]
candidateS := sTable[hashS]
score := func(m match) int {
// Matches that start further forward are penalized, since the bytes before them must be emitted as literals.
score := m.length - m.s
if nextEmit == m.s {
// If we do not have to emit literals, we save 1 byte
score++
}
offset := m.s - m.offset
if m.rep {
return score - emitRepeatSize(offset, m.length)
}
return score - emitCopySize(offset, m.length)
}
matchAt := func(offset, s int, first uint32, rep bool) match {
if best.length != 0 && best.s-best.offset == s-offset {
// Don't retest if we have the same offset.
return match{offset: offset, s: s}
}
if load32(src, offset) != first {
return match{offset: offset, s: s}
}
// length temporarily holds the match end position in src; the offset is subtracted below, after extension.
m := match{offset: offset, s: s, length: 4 + offset, rep: rep}
s += 4
for s <= sLimit {
if diff := load64(src, s) ^ load64(src, m.length); diff != 0 {
m.length += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
m.length += 8
}
m.length -= offset
m.score = score(m)
if m.score <= -m.s {
// Eliminate if no savings, we might find a better one.
m.length = 0
}
return m
}
bestOf := func(a, b match) match {
if b.length == 0 {
return a
}
if a.length == 0 {
return b
}
as := a.score + b.s
bs := b.score + a.s
if as >= bs {
return a
}
return b
}
best = bestOf(matchAt(getCur(candidateL), s, uint32(cv), false), matchAt(getPrev(candidateL), s, uint32(cv), false))
best = bestOf(best, matchAt(getCur(candidateS), s, uint32(cv), false))
best = bestOf(best, matchAt(getPrev(candidateS), s, uint32(cv), false))
{
best = bestOf(best, matchAt(s-repeat+1, s+1, uint32(cv>>8), true))
if best.length > 0 {
// s+1
nextShort := sTable[hash4(cv>>8, sTableBits)]
s := s + 1
cv := load64(src, s)
nextLong := lTable[hash8(cv, lTableBits)]
best = bestOf(best, matchAt(getCur(nextShort), s, uint32(cv), false))
best = bestOf(best, matchAt(getPrev(nextShort), s, uint32(cv), false))
best = bestOf(best, matchAt(getCur(nextLong), s, uint32(cv), false))
best = bestOf(best, matchAt(getPrev(nextLong), s, uint32(cv), false))
// Repeat at + 2
best = bestOf(best, matchAt(s-repeat+1, s+1, uint32(cv>>8), true))
// s+2
if true {
nextShort = sTable[hash4(cv>>8, sTableBits)]
s++
cv = load64(src, s)
nextLong = lTable[hash8(cv, lTableBits)]
best = bestOf(best, matchAt(getCur(nextShort), s, uint32(cv), false))
best = bestOf(best, matchAt(getPrev(nextShort), s, uint32(cv), false))
best = bestOf(best, matchAt(getCur(nextLong), s, uint32(cv), false))
best = bestOf(best, matchAt(getPrev(nextLong), s, uint32(cv), false))
}
// Search for a match at best match end, see if that is better.
if sAt := best.s + best.length; sAt < sLimit {
sBack := best.s
backL := best.length
// Load initial values
cv = load64(src, sBack)
// Search for mismatch
next := lTable[hash8(load64(src, sAt), lTableBits)]
//next := sTable[hash4(load64(src, sAt), sTableBits)]
if checkAt := getCur(next) - backL; checkAt > 0 {
best = bestOf(best, matchAt(checkAt, sBack, uint32(cv), false))
}
if checkAt := getPrev(next) - backL; checkAt > 0 {
best = bestOf(best, matchAt(checkAt, sBack, uint32(cv), false))
}
}
}
}
// Update table
lTable[hashL] = uint64(s) | candidateL<<32
sTable[hashS] = uint64(s) | candidateS<<32
if best.length > 0 {
break
}
cv = load64(src, nextS)
s = nextS
}
// Extend backwards, not needed for repeats...
s = best.s
if !best.rep {
for best.offset > 0 && s > nextEmit && src[best.offset-1] == src[s-1] {
best.offset--
best.length++
s--
}
}
if false && best.offset >= s {
panic(fmt.Errorf("t %d >= s %d", best.offset, s))
}
// Bail if we exceed the maximum size.
if d+(s-nextEmit) > dstLimit {
return 0
}
base := s
offset := s - best.offset
s += best.length
if offset > 65535 && s-base <= 5 && !best.rep {
// Bail if the match is equal to or worse than its encoding.
s = best.s + 1
if s >= sLimit {
goto emitRemainder
}
cv = load64(src, s)
continue
}
d += emitLiteral(dst[d:], src[nextEmit:base])
if best.rep {
if nextEmit > 0 {
// same as `add := emitCopy(dst[d:], repeat, s-base)` but skips storing offset.
d += emitRepeat(dst[d:], offset, best.length)
} else {
// First match, cannot be repeat.
d += emitCopy(dst[d:], offset, best.length)
}
} else {
d += emitCopy(dst[d:], offset, best.length)
}
repeat = offset
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
if d > dstLimit {
// Bail if we don't have space for more.
return 0
}
// Fill tables...
for i := best.s + 1; i < s; i++ {
cv0 := load64(src, i)
long0 := hash8(cv0, lTableBits)
short0 := hash4(cv0, sTableBits)
lTable[long0] = uint64(i) | lTable[long0]<<32
sTable[short0] = uint64(i) | sTable[short0]<<32
}
cv = load64(src, s)
}
emitRemainder:
if nextEmit < len(src) {
// Bail if we exceed the maximum size.
if d+len(src)-nextEmit > dstLimit {
return 0
}
d += emitLiteral(dst[d:], src[nextEmit:])
}
return d
}
// encodeBlockBestSnappy encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockBestSnappy(dst, src []byte) (d int) {
// Initialize the hash tables.
const (
// Long hash matches.
lTableBits = 19
maxLTableSize = 1 << lTableBits
// Short hash matches.
sTableBits = 16
maxSTableSize = 1 << sTableBits
inputMargin = 8 + 2
)
// sLimit is when to stop looking for offset/length copies. The inputMargin
// lets us use a fast path for emitLiteral in the main loop, while we are
// looking for copies.
sLimit := len(src) - inputMargin
if len(src) < minNonLiteralBlockSize {
return 0
}
var lTable [maxLTableSize]uint64
var sTable [maxSTableSize]uint64
// Bail if we can't compress to at least this.
dstLimit := len(src) - 5
// nextEmit is where in src the next emitLiteral should start from.
nextEmit := 0
// The encoded form must start with a literal, as there are no previous
// bytes to copy, so we start looking for hash matches at s == 1.
s := 1
cv := load64(src, s)
// We search for a repeat at -1, but don't output repeats when nextEmit == 0
repeat := 1
const lowbitMask = 0xffffffff
getCur := func(x uint64) int {
return int(x & lowbitMask)
}
getPrev := func(x uint64) int {
return int(x >> 32)
}
const maxSkip = 64
for {
type match struct {
offset int
s int
length int
score int
}
var best match
for {
// Next src position to check
nextS := (s-nextEmit)>>8 + 1
if nextS > maxSkip {
nextS = s + maxSkip
} else {
nextS += s
}
if nextS > sLimit {
goto emitRemainder
}
hashL := hash8(cv, lTableBits)
hashS := hash4(cv, sTableBits)
candidateL := lTable[hashL]
candidateS := sTable[hashS]
score := func(m match) int {
// Matches that start further forward are penalized, since the bytes before them must be emitted as literals.
score := m.length - m.s
if nextEmit == m.s {
// If we do not have to emit literals, we save 1 byte
score++
}
offset := m.s - m.offset
return score - emitCopySize(offset, m.length)
}
matchAt := func(offset, s int, first uint32) match {
if best.length != 0 && best.s-best.offset == s-offset {
// Don't retest if we have the same offset.
return match{offset: offset, s: s}
}
if load32(src, offset) != first {
return match{offset: offset, s: s}
}
// length temporarily holds the match end position in src; the offset is subtracted below, after extension.
m := match{offset: offset, s: s, length: 4 + offset}
s += 4
for s <= sLimit {
if diff := load64(src, s) ^ load64(src, m.length); diff != 0 {
m.length += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
m.length += 8
}
m.length -= offset
m.score = score(m)
if m.score <= -m.s {
// Eliminate if no savings, we might find a better one.
m.length = 0
}
return m
}
bestOf := func(a, b match) match {
if b.length == 0 {
return a
}
if a.length == 0 {
return b
}
as := a.score + b.s
bs := b.score + a.s
if as >= bs {
return a
}
return b
}
best = bestOf(matchAt(getCur(candidateL), s, uint32(cv)), matchAt(getPrev(candidateL), s, uint32(cv)))
best = bestOf(best, matchAt(getCur(candidateS), s, uint32(cv)))
best = bestOf(best, matchAt(getPrev(candidateS), s, uint32(cv)))
{
best = bestOf(best, matchAt(s-repeat+1, s+1, uint32(cv>>8)))
if best.length > 0 {
// s+1
nextShort := sTable[hash4(cv>>8, sTableBits)]
s := s + 1
cv := load64(src, s)
nextLong := lTable[hash8(cv, lTableBits)]
best = bestOf(best, matchAt(getCur(nextShort), s, uint32(cv)))
best = bestOf(best, matchAt(getPrev(nextShort), s, uint32(cv)))
best = bestOf(best, matchAt(getCur(nextLong), s, uint32(cv)))
best = bestOf(best, matchAt(getPrev(nextLong), s, uint32(cv)))
// Repeat at + 2
best = bestOf(best, matchAt(s-repeat+1, s+1, uint32(cv>>8)))
// s+2
if true {
nextShort = sTable[hash4(cv>>8, sTableBits)]
s++
cv = load64(src, s)
nextLong = lTable[hash8(cv, lTableBits)]
best = bestOf(best, matchAt(getCur(nextShort), s, uint32(cv)))
best = bestOf(best, matchAt(getPrev(nextShort), s, uint32(cv)))
best = bestOf(best, matchAt(getCur(nextLong), s, uint32(cv)))
best = bestOf(best, matchAt(getPrev(nextLong), s, uint32(cv)))
}
// Search for a match at best match end, see if that is better.
if sAt := best.s + best.length; sAt < sLimit {
sBack := best.s
backL := best.length
// Load initial values
cv = load64(src, sBack)
// Search for mismatch
next := lTable[hash8(load64(src, sAt), lTableBits)]
//next := sTable[hash4(load64(src, sAt), sTableBits)]
if checkAt := getCur(next) - backL; checkAt > 0 {
best = bestOf(best, matchAt(checkAt, sBack, uint32(cv)))
}
if checkAt := getPrev(next) - backL; checkAt > 0 {
best = bestOf(best, matchAt(checkAt, sBack, uint32(cv)))
}
}
}
}
// Update table
lTable[hashL] = uint64(s) | candidateL<<32
sTable[hashS] = uint64(s) | candidateS<<32
if best.length > 0 {
break
}
cv = load64(src, nextS)
s = nextS
}
// Extend backwards, not needed for repeats...
s = best.s
if true {
for best.offset > 0 && s > nextEmit && src[best.offset-1] == src[s-1] {
best.offset--
best.length++
s--
}
}
if false && best.offset >= s {
panic(fmt.Errorf("t %d >= s %d", best.offset, s))
}
// Bail if we exceed the maximum size.
if d+(s-nextEmit) > dstLimit {
return 0
}
base := s
offset := s - best.offset
s += best.length
if offset > 65535 && s-base <= 5 {
// Bail if the match is equal to or worse than its encoding.
s = best.s + 1
if s >= sLimit {
goto emitRemainder
}
cv = load64(src, s)
continue
}
d += emitLiteral(dst[d:], src[nextEmit:base])
d += emitCopyNoRepeat(dst[d:], offset, best.length)
repeat = offset
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
if d > dstLimit {
// Bail if we don't have space for more.
return 0
}
// Fill tables...
for i := best.s + 1; i < s; i++ {
cv0 := load64(src, i)
long0 := hash8(cv0, lTableBits)
short0 := hash4(cv0, sTableBits)
lTable[long0] = uint64(i) | lTable[long0]<<32
sTable[short0] = uint64(i) | sTable[short0]<<32
}
cv = load64(src, s)
}
emitRemainder:
if nextEmit < len(src) {
// Bail if we exceed the maximum size.
if d+len(src)-nextEmit > dstLimit {
return 0
}
d += emitLiteral(dst[d:], src[nextEmit:])
}
return d
}
// emitCopySize returns the size to encode the offset+length
//
// It assumes that:
// 1 <= offset && offset <= math.MaxUint32
// 4 <= length && length <= 1 << 24
func emitCopySize(offset, length int) int {
if offset >= 65536 {
i := 0
if length > 64 {
length -= 64
if length >= 4 {
// Emit remaining as repeats
return 5 + emitRepeatSize(offset, length)
}
i = 5
}
if length == 0 {
return i
}
return i + 5
}
// Offset no more than 2 bytes.
if length > 64 {
// Emit remaining as repeats, at least 4 bytes remain.
return 3 + emitRepeatSize(offset, length-60)
}
if length >= 12 || offset >= 2048 {
return 3
}
// Emit the remaining copy, encoded as 2 bytes.
return 2
}
// emitRepeatSize returns the number of bytes required to encode a repeat.
// Length must be at least 4 and < 1<<24
func emitRepeatSize(offset, length int) int {
// Repeat offset, make length cheaper
if length <= 4+4 || (length < 8+4 && offset < 2048) {
return 2
}
if length < (1<<8)+4+4 {
return 3
}
if length < (1<<16)+(1<<8)+4 {
return 4
}
const maxRepeat = (1 << 24) - 1
length -= (1 << 16) - 4
left := 0
if length > maxRepeat {
left = length - maxRepeat + 4
length = maxRepeat - 4
}
if left > 0 {
return 5 + emitRepeatSize(offset, left)
}
return 5
}
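emitRepeatSize must agree byte-for-byte with what emitRepeat (in encode_go.go) writes, since the scorer above uses it to cost candidate matches. A few hand-worked cases, as a sketch (the helper name is chosen for illustration):

// repeatSizeExamples spot-checks emitRepeatSize against worked examples.
func repeatSizeExamples() {
	cases := []struct{ offset, length, want int }{
		{1000, 8, 2},  // length-4 <= 4: fits the 2-byte tag
		{5000, 10, 3}, // offset >= 2048: needs the 8-bit length form
		{1, 300, 4},   // 16-bit length form
		{1, 70000, 5}, // 24-bit length form
	}
	for _, c := range cases {
		if got := emitRepeatSize(c.offset, c.length); got != c.want {
			panic(fmt.Sprintf("emitRepeatSize(%d, %d) = %d, want %d",
				c.offset, c.length, got, c.want))
		}
	}
}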

vendor/github.com/klauspost/compress/s2/encode_better.go generated vendored Normal file
@@ -0,0 +1,431 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package s2
import (
"math/bits"
)
// hash4 returns the hash of the lowest 4 bytes of u to fit in a hash table with h bits.
// Preferably h should be a constant and should always be <32.
func hash4(u uint64, h uint8) uint32 {
const prime4bytes = 2654435761
return (uint32(u) * prime4bytes) >> ((32 - h) & 31)
}
// hash5 returns the hash of the lowest 5 bytes of u to fit in a hash table with h bits.
// Preferably h should be a constant and should always be <64.
func hash5(u uint64, h uint8) uint32 {
const prime5bytes = 889523592379
return uint32(((u << (64 - 40)) * prime5bytes) >> ((64 - h) & 63))
}
// hash7 returns the hash of the lowest 7 bytes of u to fit in a hash table with h bits.
// Preferably h should be a constant and should always be <64.
func hash7(u uint64, h uint8) uint32 {
const prime7bytes = 58295818150454627
return uint32(((u << (64 - 56)) * prime7bytes) >> ((64 - h) & 63))
}
// hash8 returns the hash of u to fit in a hash table with h bits.
// Preferably h should be a constant and should always be <64.
func hash8(u uint64, h uint8) uint32 {
const prime8bytes = 0xcf1bbcdcb7a56463
return uint32((u * prime8bytes) >> ((64 - h) & 63))
}
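// All four hashes above share one multiply-shift shape, sketched here
// (illustrative only; the real functions pair each width with its own
// prime): shift away the bytes that should not participate, multiply by
// a large odd constant to diffuse bits upward, then keep only the top h
// bits so the result always indexes a table of 1<<h entries.
func hashShapeSketch(u uint64, usedBytes, h uint8) uint32 {
	const prime8bytes = 0xcf1bbcdcb7a56463
	return uint32(((u << (64 - 8*usedBytes)) * prime8bytes) >> ((64 - h) & 63))
}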
// encodeBlockBetterGo encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockBetterGo(dst, src []byte) (d int) {
// sLimit is when to stop looking for offset/length copies. The inputMargin
// lets us use a fast path for emitLiteral in the main loop, while we are
// looking for copies.
sLimit := len(src) - inputMargin
if len(src) < minNonLiteralBlockSize {
return 0
}
// Initialize the hash tables.
const (
// Long hash matches.
lTableBits = 16
maxLTableSize = 1 << lTableBits
// Short hash matches.
sTableBits = 14
maxSTableSize = 1 << sTableBits
)
var lTable [maxLTableSize]uint32
var sTable [maxSTableSize]uint32
// Bail if we can't compress to at least this.
dstLimit := len(src) - len(src)>>5 - 6
// nextEmit is where in src the next emitLiteral should start from.
nextEmit := 0
// The encoded form must start with a literal, as there are no previous
// bytes to copy, so we start looking for hash matches at s == 1.
s := 1
cv := load64(src, s)
// We initialize repeat to 0, so we never match on first attempt
repeat := 0
for {
candidateL := 0
nextS := 0
for {
// Next src position to check
nextS = s + (s-nextEmit)>>7 + 1
if nextS > sLimit {
goto emitRemainder
}
hashL := hash7(cv, lTableBits)
hashS := hash4(cv, sTableBits)
candidateL = int(lTable[hashL])
candidateS := int(sTable[hashS])
lTable[hashL] = uint32(s)
sTable[hashS] = uint32(s)
// Check repeat at offset checkRep.
const checkRep = 1
// Note: the false guard below disables repeat detection in this encoder.
if false && uint32(cv>>(checkRep*8)) == load32(src, s-repeat+checkRep) {
base := s + checkRep
// Extend back
for i := base - repeat; base > nextEmit && i > 0 && src[i-1] == src[base-1]; {
i--
base--
}
d += emitLiteral(dst[d:], src[nextEmit:base])
// Extend forward
candidate := s - repeat + 4 + checkRep
s += 4 + checkRep
for s < len(src) {
if len(src)-s < 8 {
if src[s] == src[candidate] {
s++
candidate++
continue
}
break
}
if diff := load64(src, s) ^ load64(src, candidate); diff != 0 {
s += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
candidate += 8
}
if nextEmit > 0 {
// same as `add := emitCopy(dst[d:], repeat, s-base)` but skips storing offset.
d += emitRepeat(dst[d:], repeat, s-base)
} else {
// First match, cannot be repeat.
d += emitCopy(dst[d:], repeat, s-base)
}
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
cv = load64(src, s)
continue
}
if uint32(cv) == load32(src, candidateL) {
break
}
// Check our short candidate
if uint32(cv) == load32(src, candidateS) {
// Try a long candidate at s+1
hashL = hash7(cv>>8, lTableBits)
candidateL = int(lTable[hashL])
lTable[hashL] = uint32(s + 1)
if uint32(cv>>8) == load32(src, candidateL) {
s++
break
}
// Use our short candidate.
candidateL = candidateS
break
}
cv = load64(src, nextS)
s = nextS
}
// Extend backwards
for candidateL > 0 && s > nextEmit && src[candidateL-1] == src[s-1] {
candidateL--
s--
}
// Bail if we exceed the maximum size.
if d+(s-nextEmit) > dstLimit {
return 0
}
base := s
offset := base - candidateL
// Extend the 4-byte match as long as possible.
s += 4
candidateL += 4
for s < len(src) {
if len(src)-s < 8 {
if src[s] == src[candidateL] {
s++
candidateL++
continue
}
break
}
if diff := load64(src, s) ^ load64(src, candidateL); diff != 0 {
s += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
candidateL += 8
}
if offset > 65535 && s-base <= 5 && repeat != offset {
// Bail if the match is equal to or worse than its encoding.
s = nextS + 1
if s >= sLimit {
goto emitRemainder
}
cv = load64(src, s)
continue
}
d += emitLiteral(dst[d:], src[nextEmit:base])
if repeat == offset {
d += emitRepeat(dst[d:], offset, s-base)
} else {
d += emitCopy(dst[d:], offset, s-base)
repeat = offset
}
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
if d > dstLimit {
// Bail if we don't have space for more.
return 0
}
// Index match start+1 (long) and start+2 (short)
index0 := base + 1
// Index match end-2 (long) and end-1 (short)
index1 := s - 2
cv0 := load64(src, index0)
cv1 := load64(src, index1)
cv = load64(src, s)
lTable[hash7(cv0, lTableBits)] = uint32(index0)
lTable[hash7(cv0>>8, lTableBits)] = uint32(index0 + 1)
lTable[hash7(cv1, lTableBits)] = uint32(index1)
lTable[hash7(cv1>>8, lTableBits)] = uint32(index1 + 1)
sTable[hash4(cv0>>8, sTableBits)] = uint32(index0 + 1)
sTable[hash4(cv0>>16, sTableBits)] = uint32(index0 + 2)
sTable[hash4(cv1>>8, sTableBits)] = uint32(index1 + 1)
}
emitRemainder:
if nextEmit < len(src) {
// Bail if we exceed the maximum size.
if d+len(src)-nextEmit > dstLimit {
return 0
}
d += emitLiteral(dst[d:], src[nextEmit:])
}
return d
}
// encodeBlockBetterSnappyGo encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlockBetterSnappyGo(dst, src []byte) (d int) {
// sLimit is when to stop looking for offset/length copies. The inputMargin
// lets us use a fast path for emitLiteral in the main loop, while we are
// looking for copies.
sLimit := len(src) - inputMargin
if len(src) < minNonLiteralBlockSize {
return 0
}
// Initialize the hash tables.
const (
// Long hash matches.
lTableBits = 16
maxLTableSize = 1 << lTableBits
// Short hash matches.
sTableBits = 14
maxSTableSize = 1 << sTableBits
)
var lTable [maxLTableSize]uint32
var sTable [maxSTableSize]uint32
// Bail if we can't compress to at least this.
dstLimit := len(src) - len(src)>>5 - 6
// nextEmit is where in src the next emitLiteral should start from.
nextEmit := 0
// The encoded form must start with a literal, as there are no previous
// bytes to copy, so we start looking for hash matches at s == 1.
s := 1
cv := load64(src, s)
// We initialize repeat to 0, so we never match on first attempt
repeat := 0
const maxSkip = 100
for {
candidateL := 0
nextS := 0
for {
// Next src position to check
nextS = (s-nextEmit)>>7 + 1
if nextS > maxSkip {
nextS = s + maxSkip
} else {
nextS += s
}
if nextS > sLimit {
goto emitRemainder
}
hashL := hash7(cv, lTableBits)
hashS := hash4(cv, sTableBits)
candidateL = int(lTable[hashL])
candidateS := int(sTable[hashS])
lTable[hashL] = uint32(s)
sTable[hashS] = uint32(s)
if uint32(cv) == load32(src, candidateL) {
break
}
// Check our short candidate
if uint32(cv) == load32(src, candidateS) {
// Try a long candidate at s+1
hashL = hash7(cv>>8, lTableBits)
candidateL = int(lTable[hashL])
lTable[hashL] = uint32(s + 1)
if uint32(cv>>8) == load32(src, candidateL) {
s++
break
}
// Use our short candidate.
candidateL = candidateS
break
}
cv = load64(src, nextS)
s = nextS
}
// Extend backwards
for candidateL > 0 && s > nextEmit && src[candidateL-1] == src[s-1] {
candidateL--
s--
}
// Bail if we exceed the maximum size.
if d+(s-nextEmit) > dstLimit {
return 0
}
base := s
offset := base - candidateL
// Extend the 4-byte match as long as possible.
s += 4
candidateL += 4
for s < len(src) {
if len(src)-s < 8 {
if src[s] == src[candidateL] {
s++
candidateL++
continue
}
break
}
if diff := load64(src, s) ^ load64(src, candidateL); diff != 0 {
s += bits.TrailingZeros64(diff) >> 3
break
}
s += 8
candidateL += 8
}
if offset > 65535 && s-base <= 5 && repeat != offset {
// Bail if the match is equal to or worse than its encoding.
s = nextS + 1
if s >= sLimit {
goto emitRemainder
}
cv = load64(src, s)
continue
}
d += emitLiteral(dst[d:], src[nextEmit:base])
d += emitCopyNoRepeat(dst[d:], offset, s-base)
repeat = offset
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
if d > dstLimit {
// Bail if we don't have space for more.
return 0
}
// Index match start+1 (long) and start+2 (short)
index0 := base + 1
// Index match end-2 (long) and end-1 (short)
index1 := s - 2
cv0 := load64(src, index0)
cv1 := load64(src, index1)
cv = load64(src, s)
lTable[hash7(cv0, lTableBits)] = uint32(index0)
lTable[hash7(cv0>>8, lTableBits)] = uint32(index0 + 1)
lTable[hash7(cv1, lTableBits)] = uint32(index1)
lTable[hash7(cv1>>8, lTableBits)] = uint32(index1 + 1)
sTable[hash4(cv0>>8, sTableBits)] = uint32(index0 + 1)
sTable[hash4(cv0>>16, sTableBits)] = uint32(index0 + 2)
sTable[hash4(cv1>>8, sTableBits)] = uint32(index1 + 1)
}
emitRemainder:
if nextEmit < len(src) {
// Bail if we exceed the maximum size.
if d+len(src)-nextEmit > dstLimit {
return 0
}
d += emitLiteral(dst[d:], src[nextEmit:])
}
return d
}

vendor/github.com/klauspost/compress/s2/encode_go.go generated vendored Normal file
@@ -0,0 +1,298 @@
//go:build !amd64 || appengine || !gc || noasm
// +build !amd64 appengine !gc noasm
package s2
import (
"math/bits"
)
// encodeBlock encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src))
func encodeBlock(dst, src []byte) (d int) {
if len(src) < minNonLiteralBlockSize {
return 0
}
return encodeBlockGo(dst, src)
}
// encodeBlockBetter encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src))
func encodeBlockBetter(dst, src []byte) (d int) {
return encodeBlockBetterGo(dst, src)
}
// encodeBlockBetterSnappy encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src))
func encodeBlockBetterSnappy(dst, src []byte) (d int) {
return encodeBlockBetterSnappyGo(dst, src)
}
// encodeBlockSnappy encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src))
func encodeBlockSnappy(dst, src []byte) (d int) {
if len(src) < minNonLiteralBlockSize {
return 0
}
return encodeBlockSnappyGo(dst, src)
}
// emitLiteral writes a literal chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes
// 0 <= len(lit) && len(lit) <= math.MaxUint32
func emitLiteral(dst, lit []byte) int {
if len(lit) == 0 {
return 0
}
const num = 63<<2 | tagLiteral
i, n := 0, uint(len(lit)-1)
switch {
case n < 60:
dst[0] = uint8(n)<<2 | tagLiteral
i = 1
case n < 1<<8:
dst[1] = uint8(n)
dst[0] = 60<<2 | tagLiteral
i = 2
case n < 1<<16:
dst[2] = uint8(n >> 8)
dst[1] = uint8(n)
dst[0] = 61<<2 | tagLiteral
i = 3
case n < 1<<24:
dst[3] = uint8(n >> 16)
dst[2] = uint8(n >> 8)
dst[1] = uint8(n)
dst[0] = 62<<2 | tagLiteral
i = 4
default:
dst[4] = uint8(n >> 24)
dst[3] = uint8(n >> 16)
dst[2] = uint8(n >> 8)
dst[1] = uint8(n)
dst[0] = 63<<2 | tagLiteral
i = 5
}
return i + copy(dst[i:], lit)
}
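// literalHeaderSketch illustrates the header layout emitLiteral writes
// (sketch only): literal lengths up to 60 are stored in the tag byte
// itself, while longer literals use an escape tag (values 60..63) plus
// 1..4 trailing bytes holding len(lit)-1.
func literalHeaderSketch() {
	buf := make([]byte, 128)
	if n := emitLiteral(buf, make([]byte, 5)); n != 6 {
		panic("want 1 tag byte + 5 literal bytes") // tag: 4<<2 | tagLiteral
	}
	if n := emitLiteral(buf, make([]byte, 100)); n != 102 {
		panic("want 2 header bytes + 100 literal bytes") // 60<<2 | tagLiteral, then 99
	}
}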
// emitRepeat writes a repeat chunk and returns the number of bytes written.
// Length must be at least 4 and < 1<<24
func emitRepeat(dst []byte, offset, length int) int {
// Repeat offset, make length cheaper
length -= 4
if length <= 4 {
dst[0] = uint8(length)<<2 | tagCopy1
dst[1] = 0
return 2
}
if length < 8 && offset < 2048 {
// Encode WITH offset
dst[1] = uint8(offset)
dst[0] = uint8(offset>>8)<<5 | uint8(length)<<2 | tagCopy1
return 2
}
if length < (1<<8)+4 {
length -= 4
dst[2] = uint8(length)
dst[1] = 0
dst[0] = 5<<2 | tagCopy1
return 3
}
if length < (1<<16)+(1<<8) {
length -= 1 << 8
dst[3] = uint8(length >> 8)
dst[2] = uint8(length >> 0)
dst[1] = 0
dst[0] = 6<<2 | tagCopy1
return 4
}
const maxRepeat = (1 << 24) - 1
length -= 1 << 16
left := 0
if length > maxRepeat {
left = length - maxRepeat + 4
length = maxRepeat - 4
}
dst[4] = uint8(length >> 16)
dst[3] = uint8(length >> 8)
dst[2] = uint8(length >> 0)
dst[1] = 0
dst[0] = 7<<2 | tagCopy1
if left > 0 {
return 5 + emitRepeat(dst[5:], offset, left)
}
return 5
}
// emitCopy writes a copy chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes
// 1 <= offset && offset <= math.MaxUint32
// 4 <= length && length <= 1 << 24
func emitCopy(dst []byte, offset, length int) int {
if offset >= 65536 {
i := 0
if length > 64 {
// Emit a length 64 copy, encoded as 5 bytes.
dst[4] = uint8(offset >> 24)
dst[3] = uint8(offset >> 16)
dst[2] = uint8(offset >> 8)
dst[1] = uint8(offset)
dst[0] = 63<<2 | tagCopy4
length -= 64
if length >= 4 {
// Emit remaining as repeats
return 5 + emitRepeat(dst[5:], offset, length)
}
i = 5
}
if length == 0 {
return i
}
// Emit a copy, offset encoded as 4 bytes.
dst[i+0] = uint8(length-1)<<2 | tagCopy4
dst[i+1] = uint8(offset)
dst[i+2] = uint8(offset >> 8)
dst[i+3] = uint8(offset >> 16)
dst[i+4] = uint8(offset >> 24)
return i + 5
}
// Offset no more than 2 bytes.
if length > 64 {
// Emit a length 60 copy, encoded as 3 bytes.
// Emit remaining as repeat value (minimum 4 bytes).
dst[2] = uint8(offset >> 8)
dst[1] = uint8(offset)
dst[0] = 59<<2 | tagCopy2
length -= 60
// Emit remaining as repeats, at least 4 bytes remain.
return 3 + emitRepeat(dst[3:], offset, length)
}
if length >= 12 || offset >= 2048 {
// Emit the remaining copy, encoded as 3 bytes.
dst[2] = uint8(offset >> 8)
dst[1] = uint8(offset)
dst[0] = uint8(length-1)<<2 | tagCopy2
return 3
}
// Emit the remaining copy, encoded as 2 bytes.
dst[1] = uint8(offset)
dst[0] = uint8(offset>>8)<<5 | uint8(length-4)<<2 | tagCopy1
return 2
}
// emitCopyNoRepeat writes a copy chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes
// 1 <= offset && offset <= math.MaxUint32
// 4 <= length && length <= 1 << 24
func emitCopyNoRepeat(dst []byte, offset, length int) int {
if offset >= 65536 {
i := 0
if length > 64 {
// Emit a length 64 copy, encoded as 5 bytes.
dst[4] = uint8(offset >> 24)
dst[3] = uint8(offset >> 16)
dst[2] = uint8(offset >> 8)
dst[1] = uint8(offset)
dst[0] = 63<<2 | tagCopy4
length -= 64
if length >= 4 {
// Emit remaining as repeats
return 5 + emitCopyNoRepeat(dst[5:], offset, length)
}
i = 5
}
if length == 0 {
return i
}
// Emit a copy, offset encoded as 4 bytes.
dst[i+0] = uint8(length-1)<<2 | tagCopy4
dst[i+1] = uint8(offset)
dst[i+2] = uint8(offset >> 8)
dst[i+3] = uint8(offset >> 16)
dst[i+4] = uint8(offset >> 24)
return i + 5
}
// Offset no more than 2 bytes.
if length > 64 {
// Emit a length 60 copy, encoded as 3 bytes.
// Emit remaining as repeat value (minimum 4 bytes).
dst[2] = uint8(offset >> 8)
dst[1] = uint8(offset)
dst[0] = 59<<2 | tagCopy2
length -= 60
// Emit remaining as repeats, at least 4 bytes remain.
return 3 + emitCopyNoRepeat(dst[3:], offset, length)
}
if length >= 12 || offset >= 2048 {
// Emit the remaining copy, encoded as 3 bytes.
dst[2] = uint8(offset >> 8)
dst[1] = uint8(offset)
dst[0] = uint8(length-1)<<2 | tagCopy2
return 3
}
// Emit the remaining copy, encoded as 2 bytes.
dst[1] = uint8(offset)
dst[0] = uint8(offset>>8)<<5 | uint8(length-4)<<2 | tagCopy1
return 2
}
// matchLen returns how many bytes match in a and b
//
// It assumes that:
// len(a) <= len(b)
//
func matchLen(a []byte, b []byte) int {
b = b[:len(a)]
var checked int
if len(a) > 4 {
// Try 4 bytes first
if diff := load32(a, 0) ^ load32(b, 0); diff != 0 {
return bits.TrailingZeros32(diff) >> 3
}
// Switch to 8 byte matching.
checked = 4
a = a[4:]
b = b[4:]
for len(a) >= 8 {
b = b[:len(a)]
if diff := load64(a, 0) ^ load64(b, 0); diff != 0 {
return checked + (bits.TrailingZeros64(diff) >> 3)
}
checked += 8
a = a[8:]
b = b[8:]
}
}
b = b[:len(a)]
for i := range a {
if a[i] != b[i] {
return int(i) + checked
}
}
return len(a) + checked
}
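The `matchLen` helper above relies on a word-at-a-time trick: XOR-ing two equal prefixes yields zero bytes, so the count of trailing zero bits of the XOR, divided by 8, is the number of leading bytes that match. A stdlib-only sketch of that trick (the buffers and the little-endian loads are illustrative, not the vendored `load32`/`load64` helpers):

```Go
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

func main() {
	a := []byte("abcdefgh")
	b := []byte("abcXefgh") // differs at index 3

	// Equal leading bytes XOR to zero, so the lowest set bit of the
	// XOR marks the first mismatching byte (little-endian order).
	x := binary.LittleEndian.Uint64(a) ^ binary.LittleEndian.Uint64(b)
	fmt.Println(bits.TrailingZeros64(x) >> 3) // prints 3
}
```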

189
vendor/github.com/klauspost/compress/s2/encodeblock_amd64.go generated vendored Normal file
View File

@ -0,0 +1,189 @@
// Code generated by command: go run gen.go -out ../encodeblock_amd64.s -stubs ../encodeblock_amd64.go -pkg=s2. DO NOT EDIT.
//go:build !appengine && !noasm && gc
// +build !appengine,!noasm,gc
package s2
// encodeBlockAsm encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4294967295 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBlockAsm(dst []byte, src []byte) int
// encodeBlockAsm4MB encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4194304 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBlockAsm4MB(dst []byte, src []byte) int
// encodeBlockAsm12B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 16383 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBlockAsm12B(dst []byte, src []byte) int
// encodeBlockAsm10B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4095 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBlockAsm10B(dst []byte, src []byte) int
// encodeBlockAsm8B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 511 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBlockAsm8B(dst []byte, src []byte) int
// encodeBetterBlockAsm encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4294967295 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBetterBlockAsm(dst []byte, src []byte) int
// encodeBetterBlockAsm4MB encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4194304 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBetterBlockAsm4MB(dst []byte, src []byte) int
// encodeBetterBlockAsm12B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 16383 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBetterBlockAsm12B(dst []byte, src []byte) int
// encodeBetterBlockAsm10B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4095 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBetterBlockAsm10B(dst []byte, src []byte) int
// encodeBetterBlockAsm8B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 511 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeBetterBlockAsm8B(dst []byte, src []byte) int
// encodeSnappyBlockAsm encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4294967295 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBlockAsm(dst []byte, src []byte) int
// encodeSnappyBlockAsm64K encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 65535 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBlockAsm64K(dst []byte, src []byte) int
// encodeSnappyBlockAsm12B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 16383 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBlockAsm12B(dst []byte, src []byte) int
// encodeSnappyBlockAsm10B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4095 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBlockAsm10B(dst []byte, src []byte) int
// encodeSnappyBlockAsm8B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 511 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBlockAsm8B(dst []byte, src []byte) int
// encodeSnappyBetterBlockAsm encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4294967295 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBetterBlockAsm(dst []byte, src []byte) int
// encodeSnappyBetterBlockAsm64K encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 65535 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBetterBlockAsm64K(dst []byte, src []byte) int
// encodeSnappyBetterBlockAsm12B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 16383 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBetterBlockAsm12B(dst []byte, src []byte) int
// encodeSnappyBetterBlockAsm10B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 4095 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBetterBlockAsm10B(dst []byte, src []byte) int
// encodeSnappyBetterBlockAsm8B encodes a non-empty src to a guaranteed-large-enough dst.
// Maximum input 511 bytes.
// It assumes that the varint-encoded length of the decompressed bytes has already been written.
//
//go:noescape
func encodeSnappyBetterBlockAsm8B(dst []byte, src []byte) int
// emitLiteral writes a literal chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes with margin of 0 bytes
// 0 <= len(lit) && len(lit) <= math.MaxUint32
//
//go:noescape
func emitLiteral(dst []byte, lit []byte) int
// emitRepeat writes a repeat chunk and returns the number of bytes written.
// Length must be at least 4 and < 1<<32
//
//go:noescape
func emitRepeat(dst []byte, offset int, length int) int
// emitCopy writes a copy chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes
// 1 <= offset && offset <= math.MaxUint32
// 4 <= length && length <= 1 << 24
//
//go:noescape
func emitCopy(dst []byte, offset int, length int) int
// emitCopyNoRepeat writes a copy chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes
// 1 <= offset && offset <= math.MaxUint32
// 4 <= length && length <= 1 << 24
//
//go:noescape
func emitCopyNoRepeat(dst []byte, offset int, length int) int
// matchLen returns how many bytes match in a and b
//
// It assumes that:
// len(a) <= len(b)
//
//go:noescape
func matchLen(a []byte, b []byte) int

vendor/github.com/klauspost/compress/s2/encodeblock_amd64.s generated vendored Normal file
File diff suppressed because it is too large

139
vendor/github.com/klauspost/compress/s2/s2.go generated vendored Normal file
View File

@ -0,0 +1,139 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Copyright (c) 2019 Klaus Post. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package s2 implements the S2 compression format.
//
// S2 is an extension of Snappy. Similar to Snappy, S2 is aimed at high throughput,
// which is why it features concurrent compression for bigger payloads.
//
// Decoding is compatible with Snappy compressed content,
// but content compressed with S2 cannot be decompressed by Snappy.
//
// For more information on Snappy/S2 differences see README in: https://github.com/klauspost/compress/tree/master/s2
//
// There are actually two S2 formats: block and stream. They are related,
// but different: trying to decompress block-compressed data as an S2 stream
// will fail, and vice versa. The block format is the Decode and Encode
// functions and the stream format is the Reader and Writer types.
//
// A "better" compression option is available. This will trade some compression
// speed for an improved compression ratio.
//
// The block format, the more common case, is used when the complete size (the
// number of bytes) of the original data is known upfront, at the time
// compression starts. The stream format, also known as the framing format, is
// for when that isn't always true.
//
// Blocks do not offer much data protection, so it is up to you to
// add data validation of decompressed blocks.
//
// Streams perform CRC validation of the decompressed data.
// Stream compression will also be performed on multiple CPU cores concurrently,
// significantly improving throughput.
package s2
import (
"bytes"
"hash/crc32"
)
/*
Each encoded block begins with the varint-encoded length of the decoded data,
followed by a sequence of chunks. Chunks begin and end on byte boundaries. The
first byte of each chunk is broken into its 2 least and 6 most significant bits
called l and m: l ranges in [0, 4) and m ranges in [0, 64). l is the chunk tag.
Zero means a literal tag. All other values mean a copy tag.
For literal tags:
- If m < 60, the next 1 + m bytes are literal bytes.
- Otherwise, let n be the little-endian unsigned integer denoted by the next
m - 59 bytes. The next 1 + n bytes after that are literal bytes.
For copy tags, length bytes are copied from offset bytes ago, in the style of
Lempel-Ziv compression algorithms. In particular:
- For l == 1, the offset ranges in [0, 1<<11) and the length in [4, 12).
The length is 4 + the low 3 bits of m. The high 3 bits of m form bits 8-10
of the offset. The next byte is bits 0-7 of the offset.
- For l == 2, the offset ranges in [0, 1<<16) and the length in [1, 65).
The length is 1 + m. The offset is the little-endian unsigned integer
denoted by the next 2 bytes.
- For l == 3, the offset ranges in [0, 1<<32) and the length in
[1, 65). The length is 1 + m. The offset is the little-endian unsigned
integer denoted by the next 4 bytes.
*/
const (
tagLiteral = 0x00
tagCopy1 = 0x01
tagCopy2 = 0x02
tagCopy4 = 0x03
)
const (
checksumSize = 4
chunkHeaderSize = 4
magicChunk = "\xff\x06\x00\x00" + magicBody
magicChunkSnappy = "\xff\x06\x00\x00" + magicBodySnappy
magicBodySnappy = "sNaPpY"
magicBody = "S2sTwO"
// maxBlockSize is the maximum size of the input to encodeBlock.
//
// For the framing format (Writer type instead of Encode function),
// this is the maximum uncompressed size of a block.
maxBlockSize = 4 << 20
// minBlockSize is the minimum size of block setting when creating a writer.
minBlockSize = 4 << 10
// Default block size
defaultBlockSize = 1 << 20
// maxSnappyBlockSize is the maximum snappy block size.
maxSnappyBlockSize = 1 << 16
obufHeaderLen = checksumSize + chunkHeaderSize
)
const (
chunkTypeCompressedData = 0x00
chunkTypeUncompressedData = 0x01
chunkTypePadding = 0xfe
chunkTypeStreamIdentifier = 0xff
)
var crcTable = crc32.MakeTable(crc32.Castagnoli)
// crc implements the checksum specified in section 3 of
// https://github.com/google/snappy/blob/master/framing_format.txt
func crc(b []byte) uint32 {
c := crc32.Update(0, crcTable, b)
return c>>15 | c<<17 + 0xa282ead8
}
// literalExtraSize returns the extra size of encoding n literals.
// n should be >= 0 and <= math.MaxUint32.
func literalExtraSize(n int64) int64 {
if n == 0 {
return 0
}
switch {
case n < 60:
return 1
case n < 1<<8:
return 2
case n < 1<<16:
return 3
case n < 1<<24:
return 4
default:
return 5
}
}
type byter interface {
Bytes() []byte
}
var _ byter = &bytes.Buffer{}
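As a sketch of how the block format described in the comment above fits together, the example below round-trips a buffer through the package-level `Encode`/`Decode` functions named in the package documentation, then inspects the varint length header and the first chunk tag (the sample data and printed values are illustrative):

```Go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"

	"github.com/klauspost/compress/s2"
)

func main() {
	src := bytes.Repeat([]byte("hello s2 "), 100)

	// Block format: the varint-encoded decoded length comes first.
	block := s2.Encode(nil, src)
	decodedLen, n := binary.Uvarint(block)
	fmt.Printf("decoded length: %d (varint: %d bytes)\n", decodedLen, n)

	// The low 2 bits of the first chunk byte are the tag l, the high
	// 6 bits are m (see the format comment above).
	l, m := block[n]&0x03, block[n]>>2
	fmt.Printf("first chunk: l=%d m=%d\n", l, m)

	// Decode must reproduce the input exactly.
	out, err := s2.Decode(nil, block)
	if err != nil || !bytes.Equal(out, src) {
		panic("round trip failed")
	}
}
```

The stream format is used analogously through the `Reader` and `Writer` types; the block helpers shown here apply only when the full payload size is known upfront.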

24
vendor/github.com/klauspost/cpuid/.gitignore generated vendored Normal file
View File

@ -0,0 +1,24 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof

46
vendor/github.com/klauspost/cpuid/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,46 @@
language: go
os:
- linux
- osx
- windows
arch:
- amd64
- arm64
go:
- 1.12.x
- 1.13.x
- 1.14.x
- master
script:
- go vet ./...
- go test -race ./...
- go test -tags=noasm ./...
stages:
- gofmt
- test
matrix:
allow_failures:
- go: 'master'
fast_finish: true
include:
- stage: gofmt
go: 1.14.x
os: linux
arch: amd64
script:
- diff <(gofmt -d .) <(printf "")
- diff <(gofmt -d ./private) <(printf "")
- go install github.com/klauspost/asmfmt/cmd/asmfmt
- diff <(asmfmt -d .) <(printf "")
- stage: i386
go: 1.14.x
os: linux
arch: amd64
script:
- GOOS=linux GOARCH=386 go test .

35
vendor/github.com/klauspost/cpuid/CONTRIBUTING.txt generated vendored Normal file
View File

@ -0,0 +1,35 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2015- Klaus Post & Contributors.
Email: klauspost@gmail.com
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.

22
vendor/github.com/klauspost/cpuid/LICENSE generated vendored Normal file
View File

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Klaus Post
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

191
vendor/github.com/klauspost/cpuid/README.md generated vendored Normal file
View File

@ -0,0 +1,191 @@
# cpuid
Package cpuid provides information about the CPU running the current program.
CPU features are detected on startup, and kept for fast access through the life of the application.
Currently x86 / x64 (AMD64/i386) and ARM (ARM64) are supported, and no external C (cgo) code is used, which should make the library very easy to use.
You can access the CPU information through the shared CPU variable of the cpuid library.
Package home: https://github.com/klauspost/cpuid
[![GoDoc][1]][2] [![Build Status][3]][4]
[1]: https://godoc.org/github.com/klauspost/cpuid?status.svg
[2]: https://godoc.org/github.com/klauspost/cpuid
[3]: https://travis-ci.org/klauspost/cpuid.svg?branch=master
[4]: https://travis-ci.org/klauspost/cpuid
# features
## x86 CPU Instructions
* **CMOV** (i686 CMOV)
* **NX** (NX (No-Execute) bit)
* **AMD3DNOW** (AMD 3DNOW)
* **AMD3DNOWEXT** (AMD 3DNowExt)
* **MMX** (standard MMX)
* **MMXEXT** (SSE integer functions or AMD MMX ext)
* **SSE** (SSE functions)
* **SSE2** (P4 SSE functions)
* **SSE3** (Prescott SSE3 functions)
* **SSSE3** (Conroe SSSE3 functions)
* **SSE4** (Penryn SSE4.1 functions)
* **SSE4A** (AMD Barcelona microarchitecture SSE4a instructions)
* **SSE42** (Nehalem SSE4.2 functions)
* **AVX** (AVX functions)
* **AVX2** (AVX2 functions)
* **FMA3** (Intel FMA 3)
* **FMA4** (Bulldozer FMA4 functions)
* **XOP** (Bulldozer XOP functions)
* **F16C** (Half-precision floating-point conversion)
* **BMI1** (Bit Manipulation Instruction Set 1)
* **BMI2** (Bit Manipulation Instruction Set 2)
* **TBM** (AMD Trailing Bit Manipulation)
* **LZCNT** (LZCNT instruction)
* **POPCNT** (POPCNT instruction)
* **AESNI** (Advanced Encryption Standard New Instructions)
* **CLMUL** (Carry-less Multiplication)
* **HTT** (Hyperthreading (enabled))
* **HLE** (Hardware Lock Elision)
* **RTM** (Restricted Transactional Memory)
* **RDRAND** (RDRAND instruction is available)
* **RDSEED** (RDSEED instruction is available)
* **ADX** (Intel ADX (Multi-Precision Add-Carry Instruction Extensions))
* **SHA** (Intel SHA Extensions)
* **AVX512F** (AVX-512 Foundation)
* **AVX512DQ** (AVX-512 Doubleword and Quadword Instructions)
* **AVX512IFMA** (AVX-512 Integer Fused Multiply-Add Instructions)
* **AVX512PF** (AVX-512 Prefetch Instructions)
* **AVX512ER** (AVX-512 Exponential and Reciprocal Instructions)
* **AVX512CD** (AVX-512 Conflict Detection Instructions)
* **AVX512BW** (AVX-512 Byte and Word Instructions)
* **AVX512VL** (AVX-512 Vector Length Extensions)
* **AVX512VBMI** (AVX-512 Vector Bit Manipulation Instructions)
* **AVX512VBMI2** (AVX-512 Vector Bit Manipulation Instructions, Version 2)
* **AVX512VNNI** (AVX-512 Vector Neural Network Instructions)
* **AVX512VPOPCNTDQ** (AVX-512 Vector Population Count Doubleword and Quadword)
* **GFNI** (Galois Field New Instructions)
* **VAES** (Vector AES)
* **AVX512BITALG** (AVX-512 Bit Algorithms)
* **VPCLMULQDQ** (Carry-Less Multiplication Quadword)
* **AVX512BF16** (AVX-512 BFLOAT16 Instructions)
* **AVX512VP2INTERSECT** (AVX-512 Intersect for D/Q)
* **MPX** (Intel MPX (Memory Protection Extensions))
* **ERMS** (Enhanced REP MOVSB/STOSB)
* **RDTSCP** (RDTSCP Instruction)
* **CX16** (CMPXCHG16B Instruction)
* **SGX** (Software Guard Extensions, with activation details)
* **VMX** (Virtual Machine Extensions)
## Performance
* **RDTSCP()** Returns current cycle count. Can be used for benchmarking.
* **SSE2SLOW** (SSE2 is supported, but usually not faster)
* **SSE3SLOW** (SSE3 is supported, but usually not faster)
* **ATOM** (Atom processor, some SSSE3 instructions are slower)
* **Cache line** (Probable size of a cache line).
* **L1, L2, L3 Cache size** on newer Intel/AMD CPUs.
## ARM CPU features
# ARM FEATURE DETECTION DISABLED!
See [#52](https://github.com/klauspost/cpuid/issues/52).
Currently only `arm64` platforms are implemented.
* **FP** Single-precision and double-precision floating point
* **ASIMD** Advanced SIMD
* **EVTSTRM** Generic timer
* **AES** AES instructions
* **PMULL** Polynomial Multiply instructions (PMULL/PMULL2)
* **SHA1** SHA-1 instructions (SHA1C, etc)
* **SHA2** SHA-2 instructions (SHA256H, etc)
* **CRC32** CRC32/CRC32C instructions
* **ATOMICS** Large System Extensions (LSE)
* **FPHP** Half-precision floating point
* **ASIMDHP** Advanced SIMD half-precision floating point
* **ARMCPUID** Some CPU ID registers readable at user-level
* **ASIMDRDM** Rounding Double Multiply Accumulate/Subtract (SQRDMLAH/SQRDMLSH)
* **JSCVT** Javascript-style double->int convert (FJCVTZS)
* **FCMA** Floating point complex number addition and multiplication
* **LRCPC** Weaker release consistency (LDAPR, etc)
* **DCPOP** Data cache clean to Point of Persistence (DC CVAP)
* **SHA3** SHA-3 instructions (EOR3, RAXI, XAR, BCAX)
* **SM3** SM3 instructions
* **SM4** SM4 instructions
* **ASIMDDP** SIMD Dot Product
* **SHA512** SHA512 instructions
* **SVE** Scalable Vector Extension
* **GPA** Generic Pointer Authentication
## CPU Vendor/VM
* **Intel**
* **AMD**
* **VIA**
* **Transmeta**
* **NSC**
* **KVM** (Kernel-based Virtual Machine)
* **MSVM** (Microsoft Hyper-V or Windows Virtual PC)
* **VMware**
* **XenHVM**
* **Bhyve**
* **Hygon**
# installing
```go get github.com/klauspost/cpuid```
# example
```Go
package main
import (
"fmt"
"github.com/klauspost/cpuid"
)
func main() {
// Print basic CPU information:
fmt.Println("Name:", cpuid.CPU.BrandName)
fmt.Println("PhysicalCores:", cpuid.CPU.PhysicalCores)
fmt.Println("ThreadsPerCore:", cpuid.CPU.ThreadsPerCore)
fmt.Println("LogicalCores:", cpuid.CPU.LogicalCores)
fmt.Println("Family", cpuid.CPU.Family, "Model:", cpuid.CPU.Model)
fmt.Println("Features:", cpuid.CPU.Features)
fmt.Println("Cacheline bytes:", cpuid.CPU.CacheLine)
fmt.Println("L1 Data Cache:", cpuid.CPU.Cache.L1D, "bytes")
fmt.Println("L1 Instruction Cache:", cpuid.CPU.Cache.L1I, "bytes")
fmt.Println("L2 Cache:", cpuid.CPU.Cache.L2, "bytes")
fmt.Println("L3 Cache:", cpuid.CPU.Cache.L3, "bytes")
// Test if we have a specific feature:
if cpuid.CPU.SSE() {
fmt.Println("We have Streaming SIMD Extensions")
}
}
```
Sample output:
```
>go run main.go
Name: Intel(R) Core(TM) i5-2540M CPU @ 2.60GHz
PhysicalCores: 2
ThreadsPerCore: 2
LogicalCores: 4
Family 6 Model: 42
Features: CMOV,MMX,MMXEXT,SSE,SSE2,SSE3,SSSE3,SSE4.1,SSE4.2,AVX,AESNI,CLMUL
Cacheline bytes: 64
We have Streaming SIMD Extensions
```
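Feature flags are typically consulted once at startup to select an implementation. A minimal sketch of that pattern, assuming the `Features` bitmask and the `AVX2` flag constant from the feature list above (`sumBytesGeneric` is a hypothetical portable body standing in for both paths so the sketch runs anywhere):
```Go
package main

import (
	"fmt"

	"github.com/klauspost/cpuid"
)

// sumBytes is selected once, at init time, based on detected CPU features.
var sumBytes func(p []byte) int

func init() {
	if cpuid.CPU.Features&cpuid.AVX2 != 0 {
		// A real program would plug in an AVX2-backed implementation here;
		// this sketch reuses the portable one so it runs on any CPU.
		sumBytes = sumBytesGeneric
	} else {
		sumBytes = sumBytesGeneric
	}
}

func sumBytesGeneric(p []byte) (s int) {
	for _, v := range p {
		s += int(v)
	}
	return s
}

func main() {
	fmt.Println(sumBytes([]byte{1, 2, 3})) // 6
}
```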
# private package
In the "private" folder you can find an autogenerated version of the library you can include in your own packages.
For this purpose all exports are removed, and functions and constants are lowercased.
This is not the recommended way of using the library, but it is provided for convenience if it is difficult for you to use external packages.
# license
This code is published under an MIT license. See LICENSE file for more information.

1504
vendor/github.com/klauspost/cpuid/cpuid.go generated vendored Normal file

File diff suppressed because it is too large

42
vendor/github.com/klauspost/cpuid/cpuid_386.s generated vendored Normal file
View File

@ -0,0 +1,42 @@
// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.
//+build 386,!gccgo,!noasm,!appengine
// func asmCpuid(op uint32) (eax, ebx, ecx, edx uint32)
TEXT ·asmCpuid(SB), 7, $0
XORL CX, CX
MOVL op+0(FP), AX
CPUID
MOVL AX, eax+4(FP)
MOVL BX, ebx+8(FP)
MOVL CX, ecx+12(FP)
MOVL DX, edx+16(FP)
RET
// func asmCpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)
TEXT ·asmCpuidex(SB), 7, $0
MOVL op+0(FP), AX
MOVL op2+4(FP), CX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
// func xgetbv(index uint32) (eax, edx uint32)
TEXT ·asmXgetbv(SB), 7, $0
MOVL index+0(FP), CX
BYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV
MOVL AX, eax+4(FP)
MOVL DX, edx+8(FP)
RET
// func asmRdtscpAsm() (eax, ebx, ecx, edx uint32)
TEXT ·asmRdtscpAsm(SB), 7, $0
BYTE $0x0F; BYTE $0x01; BYTE $0xF9 // RDTSCP
MOVL AX, eax+0(FP)
MOVL BX, ebx+4(FP)
MOVL CX, ecx+8(FP)
MOVL DX, edx+12(FP)
RET

42
vendor/github.com/klauspost/cpuid/cpuid_amd64.s generated vendored Normal file
View File

@ -0,0 +1,42 @@
// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.
//+build amd64,!gccgo,!noasm,!appengine
// func asmCpuid(op uint32) (eax, ebx, ecx, edx uint32)
TEXT ·asmCpuid(SB), 7, $0
XORQ CX, CX
MOVL op+0(FP), AX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
// func asmCpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)
TEXT ·asmCpuidex(SB), 7, $0
MOVL op+0(FP), AX
MOVL op2+4(FP), CX
CPUID
MOVL AX, eax+8(FP)
MOVL BX, ebx+12(FP)
MOVL CX, ecx+16(FP)
MOVL DX, edx+20(FP)
RET
// func asmXgetbv(index uint32) (eax, edx uint32)
TEXT ·asmXgetbv(SB), 7, $0
MOVL index+0(FP), CX
BYTE $0x0f; BYTE $0x01; BYTE $0xd0 // XGETBV
MOVL AX, eax+8(FP)
MOVL DX, edx+12(FP)
RET
// func asmRdtscpAsm() (eax, ebx, ecx, edx uint32)
TEXT ·asmRdtscpAsm(SB), 7, $0
BYTE $0x0F; BYTE $0x01; BYTE $0xF9 // RDTSCP
MOVL AX, eax+0(FP)
MOVL BX, ebx+4(FP)
MOVL CX, ecx+8(FP)
MOVL DX, edx+12(FP)
RET

26
vendor/github.com/klauspost/cpuid/cpuid_arm64.s generated vendored Normal file
View File

@ -0,0 +1,26 @@
// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.
//+build arm64,!gccgo
// See https://www.kernel.org/doc/Documentation/arm64/cpu-feature-registers.txt
// func getMidr
TEXT ·getMidr(SB), 7, $0
WORD $0xd5380000 // mrs x0, midr_el1 /* Main ID Register */
MOVD R0, midr+0(FP)
RET
// func getProcFeatures
TEXT ·getProcFeatures(SB), 7, $0
WORD $0xd5380400 // mrs x0, id_aa64pfr0_el1 /* Processor Feature Register 0 */
MOVD R0, procFeatures+0(FP)
RET
// func getInstAttributes
TEXT ·getInstAttributes(SB), 7, $0
WORD $0xd5380600 // mrs x0, id_aa64isar0_el1 /* Instruction Set Attribute Register 0 */
WORD $0xd5380621 // mrs x1, id_aa64isar1_el1 /* Instruction Set Attribute Register 1 */
MOVD R0, instAttrReg0+0(FP)
MOVD R1, instAttrReg1+8(FP)
RET

219
vendor/github.com/klauspost/cpuid/detect_arm64.go generated vendored Normal file
View File

@ -0,0 +1,219 @@
// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.
//+build arm64,!gccgo,!noasm,!appengine
package cpuid
func getMidr() (midr uint64)
func getProcFeatures() (procFeatures uint64)
func getInstAttributes() (instAttrReg0, instAttrReg1 uint64)
func initCPU() {
cpuid = func(uint32) (a, b, c, d uint32) { return 0, 0, 0, 0 }
cpuidex = func(x, y uint32) (a, b, c, d uint32) { return 0, 0, 0, 0 }
xgetbv = func(uint32) (a, b uint32) { return 0, 0 }
rdtscpAsm = func() (a, b, c, d uint32) { return 0, 0, 0, 0 }
}
func addInfo(c *CPUInfo) {
// ARM64 disabled for now.
if true {
return
}
// midr := getMidr()
// MIDR_EL1 - Main ID Register
// x--------------------------------------------------x
// | Name | bits | visible |
// |--------------------------------------------------|
// | Implementer | [31-24] | y |
// |--------------------------------------------------|
// | Variant | [23-20] | y |
// |--------------------------------------------------|
// | Architecture | [19-16] | y |
// |--------------------------------------------------|
// | PartNum | [15-4] | y |
// |--------------------------------------------------|
// | Revision | [3-0] | y |
// x--------------------------------------------------x
// fmt.Printf(" implementer: 0x%02x\n", (midr>>24)&0xff)
// fmt.Printf(" variant: 0x%01x\n", (midr>>20)&0xf)
// fmt.Printf("architecture: 0x%01x\n", (midr>>16)&0xf)
// fmt.Printf(" part num: 0x%03x\n", (midr>>4)&0xfff)
// fmt.Printf(" revision: 0x%01x\n", (midr>>0)&0xf)
procFeatures := getProcFeatures()
// ID_AA64PFR0_EL1 - Processor Feature Register 0
// x--------------------------------------------------x
// | Name | bits | visible |
// |--------------------------------------------------|
// | DIT | [51-48] | y |
// |--------------------------------------------------|
// | SVE | [35-32] | y |
// |--------------------------------------------------|
// | GIC | [27-24] | n |
// |--------------------------------------------------|
// | AdvSIMD | [23-20] | y |
// |--------------------------------------------------|
// | FP | [19-16] | y |
// |--------------------------------------------------|
// | EL3 | [15-12] | n |
// |--------------------------------------------------|
// | EL2 | [11-8] | n |
// |--------------------------------------------------|
// | EL1 | [7-4] | n |
// |--------------------------------------------------|
// | EL0 | [3-0] | n |
// x--------------------------------------------------x
var f ArmFlags
// if procFeatures&(0xf<<48) != 0 {
// fmt.Println("DIT")
// }
if procFeatures&(0xf<<32) != 0 {
f |= SVE
}
if procFeatures&(0xf<<20) != 15<<20 {
f |= ASIMD
if procFeatures&(0xf<<20) == 1<<20 {
// https://developer.arm.com/docs/ddi0595/b/aarch64-system-registers/id_aa64pfr0_el1
// 0b0001 --> As for 0b0000, and also includes support for half-precision floating-point arithmetic.
f |= FPHP
f |= ASIMDHP
}
}
if procFeatures&(0xf<<16) != 0 {
f |= FP
}
instAttrReg0, instAttrReg1 := getInstAttributes()
// https://developer.arm.com/docs/ddi0595/b/aarch64-system-registers/id_aa64isar0_el1
//
// ID_AA64ISAR0_EL1 - Instruction Set Attribute Register 0
// x--------------------------------------------------x
// | Name | bits | visible |
// |--------------------------------------------------|
// | TS | [55-52] | y |
// |--------------------------------------------------|
// | FHM | [51-48] | y |
// |--------------------------------------------------|
// | DP | [47-44] | y |
// |--------------------------------------------------|
// | SM4 | [43-40] | y |
// |--------------------------------------------------|
// | SM3 | [39-36] | y |
// |--------------------------------------------------|
// | SHA3 | [35-32] | y |
// |--------------------------------------------------|
// | RDM | [31-28] | y |
// |--------------------------------------------------|
// | ATOMICS | [23-20] | y |
// |--------------------------------------------------|
// | CRC32 | [19-16] | y |
// |--------------------------------------------------|
// | SHA2 | [15-12] | y |
// |--------------------------------------------------|
// | SHA1 | [11-8] | y |
// |--------------------------------------------------|
// | AES | [7-4] | y |
// x--------------------------------------------------x
// if instAttrReg0&(0xf<<52) != 0 {
// fmt.Println("TS")
// }
// if instAttrReg0&(0xf<<48) != 0 {
// fmt.Println("FHM")
// }
if instAttrReg0&(0xf<<44) != 0 {
f |= ASIMDDP
}
if instAttrReg0&(0xf<<40) != 0 {
f |= SM4
}
if instAttrReg0&(0xf<<36) != 0 {
f |= SM3
}
if instAttrReg0&(0xf<<32) != 0 {
f |= SHA3
}
if instAttrReg0&(0xf<<28) != 0 {
f |= ASIMDRDM
}
if instAttrReg0&(0xf<<20) != 0 {
f |= ATOMICS
}
if instAttrReg0&(0xf<<16) != 0 {
f |= CRC32
}
if instAttrReg0&(0xf<<12) != 0 {
f |= SHA2
}
if instAttrReg0&(0xf<<12) == 2<<12 {
// https://developer.arm.com/docs/ddi0595/b/aarch64-system-registers/id_aa64isar0_el1
// 0b0010 --> As 0b0001, plus SHA512H, SHA512H2, SHA512SU0, and SHA512SU1 instructions implemented.
f |= SHA512
}
if instAttrReg0&(0xf<<8) != 0 {
f |= SHA1
}
if instAttrReg0&(0xf<<4) != 0 {
f |= AES
}
if instAttrReg0&(0xf<<4) == 2<<4 {
// https://developer.arm.com/docs/ddi0595/b/aarch64-system-registers/id_aa64isar0_el1
// 0b0010 --> As for 0b0001, plus PMULL/PMULL2 instructions operating on 64-bit data quantities.
f |= PMULL
}
// https://developer.arm.com/docs/ddi0595/b/aarch64-system-registers/id_aa64isar1_el1
//
// ID_AA64ISAR1_EL1 - Instruction set attribute register 1
// x--------------------------------------------------x
// | Name | bits | visible |
// |--------------------------------------------------|
// | GPI | [31-28] | y |
// |--------------------------------------------------|
// | GPA | [27-24] | y |
// |--------------------------------------------------|
// | LRCPC | [23-20] | y |
// |--------------------------------------------------|
// | FCMA | [19-16] | y |
// |--------------------------------------------------|
// | JSCVT | [15-12] | y |
// |--------------------------------------------------|
// | API | [11-8] | y |
// |--------------------------------------------------|
// | APA | [7-4] | y |
// |--------------------------------------------------|
// | DPB | [3-0] | y |
// x--------------------------------------------------x
// if instAttrReg1&(0xf<<28) != 0 {
// fmt.Println("GPI")
// }
if instAttrReg1&(0xf<<24) != 0 {
f |= GPA
}
if instAttrReg1&(0xf<<20) != 0 {
f |= LRCPC
}
if instAttrReg1&(0xf<<16) != 0 {
f |= FCMA
}
if instAttrReg1&(0xf<<12) != 0 {
f |= JSCVT
}
// if instAttrReg1&(0xf<<8) != 0 {
// fmt.Println("API")
// }
// if instAttrReg1&(0xf<<4) != 0 {
// fmt.Println("APA")
// }
if instAttrReg1&(0xf<<0) != 0 {
f |= DCPOP
}
c.Arm = f
}

33
vendor/github.com/klauspost/cpuid/detect_intel.go generated vendored Normal file
View File

@ -0,0 +1,33 @@
// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.
//+build 386,!gccgo,!noasm amd64,!gccgo,!noasm,!appengine
package cpuid
func asmCpuid(op uint32) (eax, ebx, ecx, edx uint32)
func asmCpuidex(op, op2 uint32) (eax, ebx, ecx, edx uint32)
func asmXgetbv(index uint32) (eax, edx uint32)
func asmRdtscpAsm() (eax, ebx, ecx, edx uint32)
func initCPU() {
cpuid = asmCpuid
cpuidex = asmCpuidex
xgetbv = asmXgetbv
rdtscpAsm = asmRdtscpAsm
}
func addInfo(c *CPUInfo) {
c.maxFunc = maxFunctionID()
c.maxExFunc = maxExtendedFunction()
c.BrandName = brandName()
c.CacheLine = cacheLine()
c.Family, c.Model = familyModel()
c.Features = support()
c.SGX = hasSGX(c.Features&SGX != 0, c.Features&SGXLC != 0)
c.ThreadsPerCore = threadsPerCore()
c.LogicalCores = logicalCores()
c.PhysicalCores = physicalCores()
c.VendorID, c.VendorString = vendorID()
c.Hz = hertz(c.BrandName)
c.cacheSize()
}

14
vendor/github.com/klauspost/cpuid/detect_ref.go generated vendored Normal file
View File

@ -0,0 +1,14 @@
// Copyright (c) 2015 Klaus Post, released under MIT License. See LICENSE file.
//+build !amd64,!386,!arm64 gccgo noasm appengine
package cpuid
func initCPU() {
cpuid = func(uint32) (a, b, c, d uint32) { return 0, 0, 0, 0 }
cpuidex = func(x, y uint32) (a, b, c, d uint32) { return 0, 0, 0, 0 }
xgetbv = func(uint32) (a, b uint32) { return 0, 0 }
rdtscpAsm = func() (a, b, c, d uint32) { return 0, 0, 0, 0 }
}
func addInfo(info *CPUInfo) {}

202
vendor/github.com/minio/md5-simd/LICENSE generated vendored Normal file
View File

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

196
vendor/github.com/minio/md5-simd/README.md generated vendored Normal file
View File

@ -0,0 +1,196 @@
# md5-simd
This is a SIMD accelerated MD5 package, allowing up to either 8 (AVX2) or 16 (AVX512) independent MD5 sums to be calculated on a single CPU core.
It was originally based on the [md5vec](https://github.com/igneous-systems/md5vec) repository by Igneous Systems, but has been made more flexible by amongst others supporting different message sizes per lane and adding AVX512.
`md5-simd` integrates a similar mechanism as described in [minio/sha256-simd](https://github.com/minio/sha256-simd#support-for-avx512) for making it easy for clients to take advantage of the parallel nature of the MD5 calculation. This will result in reduced overall CPU load.
It is important to understand that `md5-simd` **does not speed up** a single-threaded MD5 hash sum.
Rather, it allows multiple __independent__ MD5 sums to be computed in parallel on the same CPU core,
thereby making more efficient use of the computing resources.
## Usage
[![Documentation](https://godoc.org/github.com/minio/md5-simd?status.svg)](https://pkg.go.dev/github.com/minio/md5-simd?tab=doc)
In order to use `md5-simd`, you must first create a `Server`, which can be
used to instantiate one or more objects for MD5 hashing.
These objects conform to the regular [`hash.Hash`](https://pkg.go.dev/hash?tab=doc#Hash) interface
and as such the normal Write/Reset/Sum functionality works as expected.
As an example:
```
// Create server
server := md5simd.NewServer()
defer server.Close()
// Create hashing object (conforming to hash.Hash)
md5Hash := server.NewHash()
defer md5Hash.Close()
// Write one (or more) blocks
md5Hash.Write(block)
// Return digest
digest := md5Hash.Sum([]byte{})
```
To preserve performance, both a [Server](https://pkg.go.dev/github.com/minio/md5-simd?tab=doc#Server)
and individual [Hasher](https://pkg.go.dev/github.com/minio/md5-simd?tab=doc#Hasher) objects should
be closed using the `Close()` function when no longer needed.
A Hasher can efficiently be re-used by using [`Reset()`](https://pkg.go.dev/hash?tab=doc#Hash) functionality.
In case your system does not support the required instructions, it will fall back to using `crypto/md5` for hashing.
## Limitations
As explained above, `md5-simd` does not speed up an individual MD5 hash sum computation,
unless some hierarchical tree construct is used, but this will result in different outcomes.
Running a single hash on a server results in approximately half the throughput.
Instead, it allows running multiple MD5 calculations in parallel on a single CPU core.
This can be beneficial in e.g. multi-threaded server applications where many go-routines
are dealing with many requests and multiple MD5 calculations can be packed/scheduled for parallel execution on a single core.
This will result in a lower overall CPU usage as compared to using the standard `crypto/md5`
functionality where each MD5 hash computation will consume a single thread (core).
It is best to test and measure the overall CPU usage in a representative usage scenario in your application
to get an overall understanding of the benefits of `md5-simd` as compared to `crypto/md5`, ideally under heavy CPU load.
Also note that `md5-simd` is best meant to work with large objects,
so if your application only hashes small objects of a few kilobytes
you may be better off using `crypto/md5`.
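A hedged sketch of the intended pattern, hashing several independent buffers concurrently on one server (the buffers and goroutine scaffolding are illustrative; the `md5simd` calls are the ones from the usage example above):
```
package main

import (
	"fmt"
	"sync"

	md5simd "github.com/minio/md5-simd"
)

func main() {
	server := md5simd.NewServer()
	defer server.Close()

	buffers := [][]byte{
		[]byte("first object"),
		[]byte("second object"),
		[]byte("third object"),
	}

	digests := make([][]byte, len(buffers))
	var wg sync.WaitGroup
	for i, buf := range buffers {
		wg.Add(1)
		go func(i int, b []byte) {
			defer wg.Done()
			h := server.NewHash() // conforms to hash.Hash
			defer h.Close()
			h.Write(b)
			digests[i] = h.Sum(nil)
		}(i, buf)
	}
	wg.Wait()

	for i, d := range digests {
		fmt.Printf("%d: %x\n", i, d)
	}
}
```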
## Performance
For the best performance writes should be a multiple of 64 bytes, ideally a multiple of 32KB.
To help with that a [`buffered := bufio.NewWriterSize(hasher, 32<<10)`](https://golang.org/pkg/bufio/#NewWriterSize)
can be inserted if you are unsure of the sizes of the writes.
Remember to [flush](https://golang.org/pkg/bufio/#Writer.Flush) `buffered` before reading the hash.
A single 'server' can process 16 streams concurrently with 1 core (AVX-512) or 2 cores (AVX2).
In situations where it is likely that more than 16 streams are fully loaded it may be beneficial
to use multiple servers.
The following chart compares the multi-core performance between `crypto/md5` vs the AVX2 vs the AVX512 code:
![md5-performance-overview](chart/Multi-core-MD5-Aggregated-Hashing-Performance.png)
Compared to `crypto/md5`, the AVX2 version is up to 4x faster:
```
$ benchcmp crypto-md5.txt avx2.txt
benchmark old MB/s new MB/s speedup
BenchmarkParallel/32KB-4 2229.22 7370.50 3.31x
BenchmarkParallel/64KB-4 2233.61 8248.46 3.69x
BenchmarkParallel/128KB-4 2235.43 8660.74 3.87x
BenchmarkParallel/256KB-4 2236.39 8863.87 3.96x
BenchmarkParallel/512KB-4 2238.05 8985.39 4.01x
BenchmarkParallel/1MB-4 2233.56 9042.62 4.05x
BenchmarkParallel/2MB-4 2224.11 9014.46 4.05x
BenchmarkParallel/4MB-4 2199.78 8993.61 4.09x
BenchmarkParallel/8MB-4 2182.48 8748.22 4.01x
```
Compared to `crypto/md5`, the AVX512 is up to 8x faster (for larger block sizes):
```
$ benchcmp crypto-md5.txt avx512.txt
benchmark old MB/s new MB/s speedup
BenchmarkParallel/32KB-4 2229.22 11605.78 5.21x
BenchmarkParallel/64KB-4 2233.61 14329.65 6.42x
BenchmarkParallel/128KB-4 2235.43 16166.39 7.23x
BenchmarkParallel/256KB-4 2236.39 15570.09 6.96x
BenchmarkParallel/512KB-4 2238.05 16705.83 7.46x
BenchmarkParallel/1MB-4 2233.56 16941.95 7.59x
BenchmarkParallel/2MB-4 2224.11 17136.01 7.70x
BenchmarkParallel/4MB-4 2199.78 17218.61 7.83x
BenchmarkParallel/8MB-4 2182.48 17252.88 7.91x
```
These measurements were performed on AWS EC2 instance of type `c5.xlarge` equipped with a Xeon Platinum 8124M CPU at 3.0 GHz.
## Operation
To make operation as easy as possible there is a “Server” coordinating everything. The server keeps track of individual hash states and updates them as new data comes in. This can be visualized as follows:
![server-architecture](chart/server-architecture.png)
The data is sent to the server from each hash input in blocks of up to 32KB per round. In our testing we found this to be the block size that yielded the best results.
Whenever there is data available the server will collect data for up to 16 hashes and process all 16 lanes in parallel. This means that if 16 hashes have data available all the lanes will be filled. However since that may not be the case, the server will fill less lanes and do a round anyway. Lanes can also be partially filled if less than 32KB of data is written.
![server-lanes-example](chart/server-lanes-example.png)
In this example 4 lanes are fully filled and 2 lanes are partially filled. In this case the black areas will simply be masked out from the results and ignored. This is also why calculating a single hash on a server will not result in any speedup and hash writes should be a multiple of 32KB for the best performance.
For AVX512 all 16 calculations will be done on a single core, on AVX2 on 2 cores if there is data for more than 8 lanes.
So for optimal usage there should be data available for all 16 hashes. It may be perfectly reasonable to use more than 16 concurrent hashes.
## Design & Tech
md5-simd has both an AVX2 (8-lane parallel) and an AVX512 (16-lane parallel) version of the algorithm to accelerate the computation, with the following function definitions:
```
//go:noescape
func block8(state *uint32, base uintptr, bufs *int32, cache *byte, n int)
//go:noescape
func block16(state *uint32, ptrs *int64, mask uint64, n int)
```
The AVX2 version is based on the [md5vec](https://github.com/igneous-systems/md5vec) repository and is essentially unchanged except for minor (cosmetic) changes.
The AVX512 version is derived from the AVX2 version but adds some further optimizations and simplifications.
### Caching in upper ZMM registers
The AVX2 version passes in a `cache8` block of memory (about 0.5 KB) for temporary storage of intermediate results during `ROUND1` which are subsequently used during `ROUND2` through to `ROUND4`.
Since AVX512 has double the amount of registers (32 ZMM registers as compared to 16 YMM registers), it is possible to use the upper 16 ZMM registers for keeping the intermediate states on the CPU. As such, there is no need to pass in a corresponding `cache16` into the AVX512 block function.
### Direct loading using 64-bit pointers
The AVX2 version uses the `VPGATHERDD` instruction (for YMM) to do a parallel load of 8 lanes using (8 independent) 32-bit offsets. Since there is no control over how the 8 slices that are passed into the (Golang) `blockMd5` function are laid out into memory, it is not possible to derive a "base" address and corresponding offsets (all within 32-bits) for all 8 slices.
As such the AVX2 version uses an interim buffer to collect the byte slices to be hashed from all 8 input slices and passes this buffer along with (fixed) 32-bit offsets into the assembly code.
For the AVX512 version this interim buffer is not needed since the AVX512 code uses a pair of `VPGATHERQD` instructions to directly dereference 64-bit pointers (from a base register address that is initialized to zero).
Note that two load (gather) instructions are needed because the AVX512 version processes 16-lanes in parallel, requiring 16 times 64-bit = 1024 bits in total to be loaded. A simple `VALIGND` and `VPORD` are subsequently used to merge the lower and upper halves together into a single ZMM register (that contains 16 lanes of 32-bit DWORDS).
### Masking support
Because pointers are passed directly from the Golang slices, we need to protect against NULL pointers.
For this, a 16-bit mask is passed into the AVX512 assembly code; it is used during the `VPGATHERQD` instructions to mask out lanes that could otherwise result in segment violations.
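The sketch below shows how such a lane mask evolves as shorter inputs run out of 64-byte blocks; it mirrors the logic of `generateMaskAndRounds16` in `md5-util_amd64.go` further down in this diff, shortened to 4 lanes with made-up input lengths:

```go
package main

import "fmt"

func main() {
	lens := [4]uint64{192, 64, 64, 0} // bytes of data per lane (example values)
	mask := uint64(0b0111)            // lane 3 has no data and is masked out up front

	for round := uint64(1); ; round++ {
		active := false
		for i, l := range lens {
			if l/64 < round { // lane i has run out of 64-byte blocks
				mask &^= 1 << i
			} else {
				active = true
			}
		}
		if !active {
			break
		}
		fmt.Printf("round %d: mask %04b\n", round, mask) // 0111, 0001, 0001
	}
}
```

Lanes whose bit is cleared are skipped by the masked gather, so mixing long and short inputs in one batch never reads past the end of a short slice.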
### Minor optimizations
The `roll` macro (three instructions on AVX2) is no longer needed for AVX512 and is replaced by a single `VPROLD` instruction.
Also, several logical operations from the various ROUNDS of the AVX2 version could be combined into a single instruction using ternary logic (with the `VPTERNLOGD` instruction), resulting in a further simplification and speed-up.
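As an illustration of how such a ternary-logic immediate is derived, the sketch below computes the imm8 for the three-input XOR used in round 3, assuming the usual convention that bit `(a<<2)|(b<<1)|c` of the immediate holds the function value; it reproduces the `$0x96` seen in the `ROUND3` macro of `block16_amd64.s` below:

```go
package main

import "fmt"

// ternImm computes the 8-bit VPTERNLOGD immediate for an arbitrary
// three-input boolean function f(a, b, c).
func ternImm(f func(a, b, c uint32) uint32) uint8 {
	var imm uint8
	for i := 0; i < 8; i++ {
		a, b, c := uint32(i>>2&1), uint32(i>>1&1), uint32(i&1)
		if f(a, b, c)&1 == 1 {
			imm |= 1 << i
		}
	}
	return imm
}

func main() {
	// Round 3 of MD5 uses H(b, c, d) = b ^ c ^ d:
	xor3 := func(a, b, c uint32) uint32 { return a ^ b ^ c }
	fmt.Printf("imm8 for a^b^c: %#02x\n", ternImm(xor3)) // prints 0x96
}
```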
## Low level block function performance
The benchmark below shows the (single-thread) maximum performance of the `block()` function for AVX2 (having 8 lanes) and AVX512 (having 16 lanes). The baseline single-core performance of the standard `crypto/md5` package is also shown for comparison.
```
BenchmarkCryptoMd5-4 687.66 MB/s 0 B/op 0 allocs/op
BenchmarkBlock8-4 4144.80 MB/s 0 B/op 0 allocs/op
BenchmarkBlock16-4 8228.88 MB/s 0 B/op 0 allocs/op
```
## License
`md5-simd` is released under the Apache License v2.0. You can find the complete text in the file LICENSE.
## Contributing
Contributions are welcome; please send PRs for any enhancements.

vendor/github.com/minio/md5-simd/block-generic.go generated vendored Normal file

@ -0,0 +1,132 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Code generated by go run gen.go -output md5block.go; DO NOT EDIT.
package md5simd
import (
"encoding/binary"
"math/bits"
)
type digest struct {
s [4]uint32
x [BlockSize]byte
nx int
len uint64
}
func blockGeneric(dig *digest, p []byte) {
// load state
a, b, c, d := dig.s[0], dig.s[1], dig.s[2], dig.s[3]
for i := 0; i <= len(p)-BlockSize; i += BlockSize {
// eliminate bounds checks on p
q := p[i:]
q = q[:BlockSize:BlockSize]
// save current state
aa, bb, cc, dd := a, b, c, d
// load input block
x0 := binary.LittleEndian.Uint32(q[4*0x0:])
x1 := binary.LittleEndian.Uint32(q[4*0x1:])
x2 := binary.LittleEndian.Uint32(q[4*0x2:])
x3 := binary.LittleEndian.Uint32(q[4*0x3:])
x4 := binary.LittleEndian.Uint32(q[4*0x4:])
x5 := binary.LittleEndian.Uint32(q[4*0x5:])
x6 := binary.LittleEndian.Uint32(q[4*0x6:])
x7 := binary.LittleEndian.Uint32(q[4*0x7:])
x8 := binary.LittleEndian.Uint32(q[4*0x8:])
x9 := binary.LittleEndian.Uint32(q[4*0x9:])
xa := binary.LittleEndian.Uint32(q[4*0xa:])
xb := binary.LittleEndian.Uint32(q[4*0xb:])
xc := binary.LittleEndian.Uint32(q[4*0xc:])
xd := binary.LittleEndian.Uint32(q[4*0xd:])
xe := binary.LittleEndian.Uint32(q[4*0xe:])
xf := binary.LittleEndian.Uint32(q[4*0xf:])
// round 1
a = b + bits.RotateLeft32((((c^d)&b)^d)+a+x0+0xd76aa478, 7)
d = a + bits.RotateLeft32((((b^c)&a)^c)+d+x1+0xe8c7b756, 12)
c = d + bits.RotateLeft32((((a^b)&d)^b)+c+x2+0x242070db, 17)
b = c + bits.RotateLeft32((((d^a)&c)^a)+b+x3+0xc1bdceee, 22)
a = b + bits.RotateLeft32((((c^d)&b)^d)+a+x4+0xf57c0faf, 7)
d = a + bits.RotateLeft32((((b^c)&a)^c)+d+x5+0x4787c62a, 12)
c = d + bits.RotateLeft32((((a^b)&d)^b)+c+x6+0xa8304613, 17)
b = c + bits.RotateLeft32((((d^a)&c)^a)+b+x7+0xfd469501, 22)
a = b + bits.RotateLeft32((((c^d)&b)^d)+a+x8+0x698098d8, 7)
d = a + bits.RotateLeft32((((b^c)&a)^c)+d+x9+0x8b44f7af, 12)
c = d + bits.RotateLeft32((((a^b)&d)^b)+c+xa+0xffff5bb1, 17)
b = c + bits.RotateLeft32((((d^a)&c)^a)+b+xb+0x895cd7be, 22)
a = b + bits.RotateLeft32((((c^d)&b)^d)+a+xc+0x6b901122, 7)
d = a + bits.RotateLeft32((((b^c)&a)^c)+d+xd+0xfd987193, 12)
c = d + bits.RotateLeft32((((a^b)&d)^b)+c+xe+0xa679438e, 17)
b = c + bits.RotateLeft32((((d^a)&c)^a)+b+xf+0x49b40821, 22)
// round 2
a = b + bits.RotateLeft32((((b^c)&d)^c)+a+x1+0xf61e2562, 5)
d = a + bits.RotateLeft32((((a^b)&c)^b)+d+x6+0xc040b340, 9)
c = d + bits.RotateLeft32((((d^a)&b)^a)+c+xb+0x265e5a51, 14)
b = c + bits.RotateLeft32((((c^d)&a)^d)+b+x0+0xe9b6c7aa, 20)
a = b + bits.RotateLeft32((((b^c)&d)^c)+a+x5+0xd62f105d, 5)
d = a + bits.RotateLeft32((((a^b)&c)^b)+d+xa+0x02441453, 9)
c = d + bits.RotateLeft32((((d^a)&b)^a)+c+xf+0xd8a1e681, 14)
b = c + bits.RotateLeft32((((c^d)&a)^d)+b+x4+0xe7d3fbc8, 20)
a = b + bits.RotateLeft32((((b^c)&d)^c)+a+x9+0x21e1cde6, 5)
d = a + bits.RotateLeft32((((a^b)&c)^b)+d+xe+0xc33707d6, 9)
c = d + bits.RotateLeft32((((d^a)&b)^a)+c+x3+0xf4d50d87, 14)
b = c + bits.RotateLeft32((((c^d)&a)^d)+b+x8+0x455a14ed, 20)
a = b + bits.RotateLeft32((((b^c)&d)^c)+a+xd+0xa9e3e905, 5)
d = a + bits.RotateLeft32((((a^b)&c)^b)+d+x2+0xfcefa3f8, 9)
c = d + bits.RotateLeft32((((d^a)&b)^a)+c+x7+0x676f02d9, 14)
b = c + bits.RotateLeft32((((c^d)&a)^d)+b+xc+0x8d2a4c8a, 20)
// round 3
a = b + bits.RotateLeft32((b^c^d)+a+x5+0xfffa3942, 4)
d = a + bits.RotateLeft32((a^b^c)+d+x8+0x8771f681, 11)
c = d + bits.RotateLeft32((d^a^b)+c+xb+0x6d9d6122, 16)
b = c + bits.RotateLeft32((c^d^a)+b+xe+0xfde5380c, 23)
a = b + bits.RotateLeft32((b^c^d)+a+x1+0xa4beea44, 4)
d = a + bits.RotateLeft32((a^b^c)+d+x4+0x4bdecfa9, 11)
c = d + bits.RotateLeft32((d^a^b)+c+x7+0xf6bb4b60, 16)
b = c + bits.RotateLeft32((c^d^a)+b+xa+0xbebfbc70, 23)
a = b + bits.RotateLeft32((b^c^d)+a+xd+0x289b7ec6, 4)
d = a + bits.RotateLeft32((a^b^c)+d+x0+0xeaa127fa, 11)
c = d + bits.RotateLeft32((d^a^b)+c+x3+0xd4ef3085, 16)
b = c + bits.RotateLeft32((c^d^a)+b+x6+0x04881d05, 23)
a = b + bits.RotateLeft32((b^c^d)+a+x9+0xd9d4d039, 4)
d = a + bits.RotateLeft32((a^b^c)+d+xc+0xe6db99e5, 11)
c = d + bits.RotateLeft32((d^a^b)+c+xf+0x1fa27cf8, 16)
b = c + bits.RotateLeft32((c^d^a)+b+x2+0xc4ac5665, 23)
// round 4
a = b + bits.RotateLeft32((c^(b|^d))+a+x0+0xf4292244, 6)
d = a + bits.RotateLeft32((b^(a|^c))+d+x7+0x432aff97, 10)
c = d + bits.RotateLeft32((a^(d|^b))+c+xe+0xab9423a7, 15)
b = c + bits.RotateLeft32((d^(c|^a))+b+x5+0xfc93a039, 21)
a = b + bits.RotateLeft32((c^(b|^d))+a+xc+0x655b59c3, 6)
d = a + bits.RotateLeft32((b^(a|^c))+d+x3+0x8f0ccc92, 10)
c = d + bits.RotateLeft32((a^(d|^b))+c+xa+0xffeff47d, 15)
b = c + bits.RotateLeft32((d^(c|^a))+b+x1+0x85845dd1, 21)
a = b + bits.RotateLeft32((c^(b|^d))+a+x8+0x6fa87e4f, 6)
d = a + bits.RotateLeft32((b^(a|^c))+d+xf+0xfe2ce6e0, 10)
c = d + bits.RotateLeft32((a^(d|^b))+c+x6+0xa3014314, 15)
b = c + bits.RotateLeft32((d^(c|^a))+b+xd+0x4e0811a1, 21)
a = b + bits.RotateLeft32((c^(b|^d))+a+x4+0xf7537e82, 6)
d = a + bits.RotateLeft32((b^(a|^c))+d+xb+0xbd3af235, 10)
c = d + bits.RotateLeft32((a^(d|^b))+c+x2+0x2ad7d2bb, 15)
b = c + bits.RotateLeft32((d^(c|^a))+b+x9+0xeb86d391, 21)
// add saved state
a += aa
b += bb
c += cc
d += dd
}
// save state
dig.s[0], dig.s[1], dig.s[2], dig.s[3] = a, b, c, d
}

vendor/github.com/minio/md5-simd/block16_amd64.s generated vendored Normal file

@ -0,0 +1,227 @@
// Copyright (c) 2020 MinIO Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.
// This is the AVX512 implementation of the MD5 block function (16-way parallel)
#define prep(index) \
KMOVQ kmask, ktmp \
VPGATHERDD index*4(base)(ptrs*1), ktmp, mem
#define ROUND1(a, b, c, d, index, const, shift) \
VXORPS c, tmp, tmp \
VPADDD 64*const(consts), a, a \
VPADDD mem, a, a \
VPTERNLOGD $0x6C, b, d, tmp \
prep(index) \
VPADDD tmp, a, a \
VPROLD $shift, a, a \
VMOVAPD c, tmp \
VPADDD b, a, a
#define ROUND1noload(a, b, c, d, const, shift) \
VXORPS c, tmp, tmp \
VPADDD 64*const(consts), a, a \
VPADDD mem, a, a \
VPTERNLOGD $0x6C, b, d, tmp \
VPADDD tmp, a, a \
VPROLD $shift, a, a \
VMOVAPD c, tmp \
VPADDD b, a, a
#define ROUND2(a, b, c, d, zreg, const, shift) \
VPADDD 64*const(consts), a, a \
VPADDD zreg, a, a \
VANDNPS c, tmp, tmp \
VPTERNLOGD $0xEC, b, tmp, tmp2 \
VMOVAPD c, tmp \
VPADDD tmp2, a, a \
VMOVAPD c, tmp2 \
VPROLD $shift, a, a \
VPADDD b, a, a
#define ROUND3(a, b, c, d, zreg, const, shift) \
VPADDD 64*const(consts), a, a \
VPADDD zreg, a, a \
VPTERNLOGD $0x96, b, d, tmp \
VPADDD tmp, a, a \
VPROLD $shift, a, a \
VMOVAPD b, tmp \
VPADDD b, a, a
#define ROUND4(a, b, c, d, zreg, const, shift) \
VPADDD 64*const(consts), a, a \
VPADDD zreg, a, a \
VPTERNLOGD $0x36, b, c, tmp \
VPADDD tmp, a, a \
VPROLD $shift, a, a \
VXORPS c, ones, tmp \
VPADDD b, a, a
TEXT ·block16(SB),4,$0-40
MOVQ state+0(FP), BX
MOVQ base+8(FP), SI
MOVQ ptrs+16(FP), AX
KMOVQ mask+24(FP), K1
MOVQ n+32(FP), DX
MOVQ ·avx512md5consts+0(SB), DI
#define a Z0
#define b Z1
#define c Z2
#define d Z3
#define sa Z4
#define sb Z5
#define sc Z6
#define sd Z7
#define tmp Z8
#define tmp2 Z9
#define ptrs Z10
#define ones Z12
#define mem Z15
#define kmask K1
#define ktmp K3
// ----------------------------------------------------------
// Registers Z16 through to Z31 are used for caching purposes
// ----------------------------------------------------------
#define dig BX
#define count DX
#define base SI
#define consts DI
// load digest into state registers
VMOVUPD (dig), a
VMOVUPD 0x40(dig), b
VMOVUPD 0x80(dig), c
VMOVUPD 0xc0(dig), d
// load source pointers
VMOVUPD 0x00(AX), ptrs
MOVQ $-1, AX
VPBROADCASTQ AX, ones
loop:
VMOVAPD a, sa
VMOVAPD b, sb
VMOVAPD c, sc
VMOVAPD d, sd
prep(0)
VMOVAPD d, tmp
VMOVAPD mem, Z16
ROUND1(a,b,c,d, 1,0x00, 7)
VMOVAPD mem, Z17
ROUND1(d,a,b,c, 2,0x01,12)
VMOVAPD mem, Z18
ROUND1(c,d,a,b, 3,0x02,17)
VMOVAPD mem, Z19
ROUND1(b,c,d,a, 4,0x03,22)
VMOVAPD mem, Z20
ROUND1(a,b,c,d, 5,0x04, 7)
VMOVAPD mem, Z21
ROUND1(d,a,b,c, 6,0x05,12)
VMOVAPD mem, Z22
ROUND1(c,d,a,b, 7,0x06,17)
VMOVAPD mem, Z23
ROUND1(b,c,d,a, 8,0x07,22)
VMOVAPD mem, Z24
ROUND1(a,b,c,d, 9,0x08, 7)
VMOVAPD mem, Z25
ROUND1(d,a,b,c,10,0x09,12)
VMOVAPD mem, Z26
ROUND1(c,d,a,b,11,0x0a,17)
VMOVAPD mem, Z27
ROUND1(b,c,d,a,12,0x0b,22)
VMOVAPD mem, Z28
ROUND1(a,b,c,d,13,0x0c, 7)
VMOVAPD mem, Z29
ROUND1(d,a,b,c,14,0x0d,12)
VMOVAPD mem, Z30
ROUND1(c,d,a,b,15,0x0e,17)
VMOVAPD mem, Z31
ROUND1noload(b,c,d,a, 0x0f,22)
VMOVAPD d, tmp
VMOVAPD d, tmp2
ROUND2(a,b,c,d, Z17,0x10, 5)
ROUND2(d,a,b,c, Z22,0x11, 9)
ROUND2(c,d,a,b, Z27,0x12,14)
ROUND2(b,c,d,a, Z16,0x13,20)
ROUND2(a,b,c,d, Z21,0x14, 5)
ROUND2(d,a,b,c, Z26,0x15, 9)
ROUND2(c,d,a,b, Z31,0x16,14)
ROUND2(b,c,d,a, Z20,0x17,20)
ROUND2(a,b,c,d, Z25,0x18, 5)
ROUND2(d,a,b,c, Z30,0x19, 9)
ROUND2(c,d,a,b, Z19,0x1a,14)
ROUND2(b,c,d,a, Z24,0x1b,20)
ROUND2(a,b,c,d, Z29,0x1c, 5)
ROUND2(d,a,b,c, Z18,0x1d, 9)
ROUND2(c,d,a,b, Z23,0x1e,14)
ROUND2(b,c,d,a, Z28,0x1f,20)
VMOVAPD c, tmp
ROUND3(a,b,c,d, Z21,0x20, 4)
ROUND3(d,a,b,c, Z24,0x21,11)
ROUND3(c,d,a,b, Z27,0x22,16)
ROUND3(b,c,d,a, Z30,0x23,23)
ROUND3(a,b,c,d, Z17,0x24, 4)
ROUND3(d,a,b,c, Z20,0x25,11)
ROUND3(c,d,a,b, Z23,0x26,16)
ROUND3(b,c,d,a, Z26,0x27,23)
ROUND3(a,b,c,d, Z29,0x28, 4)
ROUND3(d,a,b,c, Z16,0x29,11)
ROUND3(c,d,a,b, Z19,0x2a,16)
ROUND3(b,c,d,a, Z22,0x2b,23)
ROUND3(a,b,c,d, Z25,0x2c, 4)
ROUND3(d,a,b,c, Z28,0x2d,11)
ROUND3(c,d,a,b, Z31,0x2e,16)
ROUND3(b,c,d,a, Z18,0x2f,23)
VXORPS d, ones, tmp
ROUND4(a,b,c,d, Z16,0x30, 6)
ROUND4(d,a,b,c, Z23,0x31,10)
ROUND4(c,d,a,b, Z30,0x32,15)
ROUND4(b,c,d,a, Z21,0x33,21)
ROUND4(a,b,c,d, Z28,0x34, 6)
ROUND4(d,a,b,c, Z19,0x35,10)
ROUND4(c,d,a,b, Z26,0x36,15)
ROUND4(b,c,d,a, Z17,0x37,21)
ROUND4(a,b,c,d, Z24,0x38, 6)
ROUND4(d,a,b,c, Z31,0x39,10)
ROUND4(c,d,a,b, Z22,0x3a,15)
ROUND4(b,c,d,a, Z29,0x3b,21)
ROUND4(a,b,c,d, Z20,0x3c, 6)
ROUND4(d,a,b,c, Z27,0x3d,10)
ROUND4(c,d,a,b, Z18,0x3e,15)
ROUND4(b,c,d,a, Z25,0x3f,21)
VPADDD sa, a, a
VPADDD sb, b, b
VPADDD sc, c, c
VPADDD sd, d, d
LEAQ 64(base), base
SUBQ $64, count
JNE loop
VMOVUPD a, (dig)
VMOVUPD b, 0x40(dig)
VMOVUPD c, 0x80(dig)
VMOVUPD d, 0xc0(dig)
VZEROUPPER
RET

vendor/github.com/minio/md5-simd/block8_amd64.s generated vendored Normal file

@ -0,0 +1,279 @@
// Copyright (c) 2018 Igneous Systems
// MIT License
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
// Copyright (c) 2020 MinIO Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.
// This is the AVX2 implementation of the MD5 block function (8-way parallel)
// block8(state *uint32, base uintptr, bufs *int32, cache *byte, n int)
TEXT ·block8(SB), 4, $0-40
MOVQ state+0(FP), BX
MOVQ base+8(FP), SI
MOVQ bufs+16(FP), AX
MOVQ cache+24(FP), CX
MOVQ n+32(FP), DX
MOVQ ·avx256md5consts+0(SB), DI
// Align cache (which is stack allocated by the compiler)
// to a 256 bit boundary (ymm register alignment)
// The cache8 type is deliberately oversized to permit this.
ADDQ $31, CX
ANDB $-32, CL
#define a Y0
#define b Y1
#define c Y2
#define d Y3
#define sa Y4
#define sb Y5
#define sc Y6
#define sd Y7
#define tmp Y8
#define tmp2 Y9
#define mask Y10
#define off Y11
#define ones Y12
#define rtmp1 Y13
#define rtmp2 Y14
#define mem Y15
#define dig BX
#define cache CX
#define count DX
#define base SI
#define consts DI
#define prepmask \
VXORPS mask, mask, mask \
VPCMPGTD mask, off, mask
#define prep(index) \
VMOVAPD mask, rtmp2 \
VPGATHERDD rtmp2, index*4(base)(off*1), mem
#define load(index) \
VMOVAPD index*32(cache), mem
#define store(index) \
VMOVAPD mem, index*32(cache)
#define roll(shift, a) \
VPSLLD $shift, a, rtmp1 \
VPSRLD $32-shift, a, a \
VORPS rtmp1, a, a
#define ROUND1(a, b, c, d, index, const, shift) \
VXORPS c, tmp, tmp \
VPADDD 32*const(consts), a, a \
VPADDD mem, a, a \
VANDPS b, tmp, tmp \
VXORPS d, tmp, tmp \
prep(index) \
VPADDD tmp, a, a \
roll(shift,a) \
VMOVAPD c, tmp \
VPADDD b, a, a
#define ROUND1load(a, b, c, d, index, const, shift) \
VXORPS c, tmp, tmp \
VPADDD 32*const(consts), a, a \
VPADDD mem, a, a \
VANDPS b, tmp, tmp \
VXORPS d, tmp, tmp \
load(index) \
VPADDD tmp, a, a \
roll(shift,a) \
VMOVAPD c, tmp \
VPADDD b, a, a
#define ROUND2(a, b, c, d, index, const, shift) \
VPADDD 32*const(consts), a, a \
VPADDD mem, a, a \
VANDPS b, tmp2, tmp2 \
VANDNPS c, tmp, tmp \
load(index) \
VORPS tmp, tmp2, tmp2 \
VMOVAPD c, tmp \
VPADDD tmp2, a, a \
VMOVAPD c, tmp2 \
roll(shift,a) \
VPADDD b, a, a
#define ROUND3(a, b, c, d, index, const, shift) \
VPADDD 32*const(consts), a, a \
VPADDD mem, a, a \
load(index) \
VXORPS d, tmp, tmp \
VXORPS b, tmp, tmp \
VPADDD tmp, a, a \
roll(shift,a) \
VMOVAPD b, tmp \
VPADDD b, a, a
#define ROUND4(a, b, c, d, index, const, shift) \
VPADDD 32*const(consts), a, a \
VPADDD mem, a, a \
VORPS b, tmp, tmp \
VXORPS c, tmp, tmp \
VPADDD tmp, a, a \
load(index) \
roll(shift,a) \
VXORPS c, ones, tmp \
VPADDD b, a, a
// load digest into state registers
VMOVUPD (dig), a
VMOVUPD 32(dig), b
VMOVUPD 64(dig), c
VMOVUPD 96(dig), d
// load source buffer offsets
VMOVUPD (AX), off
prepmask
VPCMPEQD ones, ones, ones
loop:
VMOVAPD a, sa
VMOVAPD b, sb
VMOVAPD c, sc
VMOVAPD d, sd
prep(0)
VMOVAPD d, tmp
store(0)
ROUND1(a,b,c,d, 1,0x00, 7)
store(1)
ROUND1(d,a,b,c, 2,0x01,12)
store(2)
ROUND1(c,d,a,b, 3,0x02,17)
store(3)
ROUND1(b,c,d,a, 4,0x03,22)
store(4)
ROUND1(a,b,c,d, 5,0x04, 7)
store(5)
ROUND1(d,a,b,c, 6,0x05,12)
store(6)
ROUND1(c,d,a,b, 7,0x06,17)
store(7)
ROUND1(b,c,d,a, 8,0x07,22)
store(8)
ROUND1(a,b,c,d, 9,0x08, 7)
store(9)
ROUND1(d,a,b,c,10,0x09,12)
store(10)
ROUND1(c,d,a,b,11,0x0a,17)
store(11)
ROUND1(b,c,d,a,12,0x0b,22)
store(12)
ROUND1(a,b,c,d,13,0x0c, 7)
store(13)
ROUND1(d,a,b,c,14,0x0d,12)
store(14)
ROUND1(c,d,a,b,15,0x0e,17)
store(15)
ROUND1load(b,c,d,a, 1,0x0f,22)
VMOVAPD d, tmp
VMOVAPD d, tmp2
ROUND2(a,b,c,d, 6,0x10, 5)
ROUND2(d,a,b,c,11,0x11, 9)
ROUND2(c,d,a,b, 0,0x12,14)
ROUND2(b,c,d,a, 5,0x13,20)
ROUND2(a,b,c,d,10,0x14, 5)
ROUND2(d,a,b,c,15,0x15, 9)
ROUND2(c,d,a,b, 4,0x16,14)
ROUND2(b,c,d,a, 9,0x17,20)
ROUND2(a,b,c,d,14,0x18, 5)
ROUND2(d,a,b,c, 3,0x19, 9)
ROUND2(c,d,a,b, 8,0x1a,14)
ROUND2(b,c,d,a,13,0x1b,20)
ROUND2(a,b,c,d, 2,0x1c, 5)
ROUND2(d,a,b,c, 7,0x1d, 9)
ROUND2(c,d,a,b,12,0x1e,14)
ROUND2(b,c,d,a, 0,0x1f,20)
load(5)
VMOVAPD c, tmp
ROUND3(a,b,c,d, 8,0x20, 4)
ROUND3(d,a,b,c,11,0x21,11)
ROUND3(c,d,a,b,14,0x22,16)
ROUND3(b,c,d,a, 1,0x23,23)
ROUND3(a,b,c,d, 4,0x24, 4)
ROUND3(d,a,b,c, 7,0x25,11)
ROUND3(c,d,a,b,10,0x26,16)
ROUND3(b,c,d,a,13,0x27,23)
ROUND3(a,b,c,d, 0,0x28, 4)
ROUND3(d,a,b,c, 3,0x29,11)
ROUND3(c,d,a,b, 6,0x2a,16)
ROUND3(b,c,d,a, 9,0x2b,23)
ROUND3(a,b,c,d,12,0x2c, 4)
ROUND3(d,a,b,c,15,0x2d,11)
ROUND3(c,d,a,b, 2,0x2e,16)
ROUND3(b,c,d,a, 0,0x2f,23)
load(0)
VXORPS d, ones, tmp
ROUND4(a,b,c,d, 7,0x30, 6)
ROUND4(d,a,b,c,14,0x31,10)
ROUND4(c,d,a,b, 5,0x32,15)
ROUND4(b,c,d,a,12,0x33,21)
ROUND4(a,b,c,d, 3,0x34, 6)
ROUND4(d,a,b,c,10,0x35,10)
ROUND4(c,d,a,b, 1,0x36,15)
ROUND4(b,c,d,a, 8,0x37,21)
ROUND4(a,b,c,d,15,0x38, 6)
ROUND4(d,a,b,c, 6,0x39,10)
ROUND4(c,d,a,b,13,0x3a,15)
ROUND4(b,c,d,a, 4,0x3b,21)
ROUND4(a,b,c,d,11,0x3c, 6)
ROUND4(d,a,b,c, 2,0x3d,10)
ROUND4(c,d,a,b, 9,0x3e,15)
ROUND4(b,c,d,a, 0,0x3f,21)
VPADDD sa, a, a
VPADDD sb, b, b
VPADDD sc, c, c
VPADDD sd, d, d
LEAQ 64(base), base
SUBQ $64, count
JNE loop
VMOVUPD a, (dig)
VMOVUPD b, 32(dig)
VMOVUPD c, 64(dig)
VMOVUPD d, 96(dig)
VZEROUPPER
RET

vendor/github.com/minio/md5-simd/block_amd64.go generated vendored Normal file

@ -0,0 +1,199 @@
//+build !noasm,!appengine,gc
// Copyright (c) 2020 MinIO Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.
package md5simd
import (
"fmt"
"math"
"sync"
"unsafe"
"github.com/klauspost/cpuid"
)
var hasAVX512 bool
//go:noescape
func block8(state *uint32, base uintptr, bufs *int32, cache *byte, n int)
//go:noescape
func block16(state *uint32, base uintptr, ptrs *int32, mask uint64, n int)
// 8-way 4x uint32 digests in 4 ymm registers
// (ymm0, ymm1, ymm2, ymm3)
type digest8 struct {
v0, v1, v2, v3 [8]uint32
}
// Stack cache for 8x64 byte md5.BlockSize bytes.
// Must be 32-byte aligned, so allocate 512+32 and
// align upwards at runtime.
type cache8 [512 + 32]byte
// MD5 magic numbers for one lane of hashing; inflated
// 8x below at init time.
var md5consts = [64]uint32{
0xd76aa478, 0xe8c7b756, 0x242070db, 0xc1bdceee,
0xf57c0faf, 0x4787c62a, 0xa8304613, 0xfd469501,
0x698098d8, 0x8b44f7af, 0xffff5bb1, 0x895cd7be,
0x6b901122, 0xfd987193, 0xa679438e, 0x49b40821,
0xf61e2562, 0xc040b340, 0x265e5a51, 0xe9b6c7aa,
0xd62f105d, 0x02441453, 0xd8a1e681, 0xe7d3fbc8,
0x21e1cde6, 0xc33707d6, 0xf4d50d87, 0x455a14ed,
0xa9e3e905, 0xfcefa3f8, 0x676f02d9, 0x8d2a4c8a,
0xfffa3942, 0x8771f681, 0x6d9d6122, 0xfde5380c,
0xa4beea44, 0x4bdecfa9, 0xf6bb4b60, 0xbebfbc70,
0x289b7ec6, 0xeaa127fa, 0xd4ef3085, 0x04881d05,
0xd9d4d039, 0xe6db99e5, 0x1fa27cf8, 0xc4ac5665,
0xf4292244, 0x432aff97, 0xab9423a7, 0xfc93a039,
0x655b59c3, 0x8f0ccc92, 0xffeff47d, 0x85845dd1,
0x6fa87e4f, 0xfe2ce6e0, 0xa3014314, 0x4e0811a1,
0xf7537e82, 0xbd3af235, 0x2ad7d2bb, 0xeb86d391,
}
// inflate the consts 8-way for 8x md5 (256 bit ymm registers)
var avx256md5consts = func(c []uint32) []uint32 {
inf := make([]uint32, 8*len(c))
for i := range c {
for j := 0; j < 8; j++ {
inf[(i*8)+j] = c[i]
}
}
return inf
}(md5consts[:])
// 16-way 4x uint32 digests in 4 zmm registers
type digest16 struct {
v0, v1, v2, v3 [16]uint32
}
// inflate the consts 16-way for 16x md5 (512 bit zmm registers)
var avx512md5consts = func(c []uint32) []uint32 {
inf := make([]uint32, 16*len(c))
for i := range c {
for j := 0; j < 16; j++ {
inf[(i*16)+j] = c[i]
}
}
return inf
}(md5consts[:])
func init() {
hasAVX512 = cpuid.CPU.AVX512F()
}
// Interface function to assembly code
func (s *md5Server) blockMd5_x16(d *digest16, input [16][]byte, half bool) {
if hasAVX512 {
blockMd5_avx512(d, input, s.allBufs, &s.maskRounds16)
} else {
d8a, d8b := digest8{}, digest8{}
for i := range d8a.v0 {
j := i + 8
d8a.v0[i], d8a.v1[i], d8a.v2[i], d8a.v3[i] = d.v0[i], d.v1[i], d.v2[i], d.v3[i]
if !half {
d8b.v0[i], d8b.v1[i], d8b.v2[i], d8b.v3[i] = d.v0[j], d.v1[j], d.v2[j], d.v3[j]
}
}
i8 := [2][8][]byte{}
for i := range i8[0] {
i8[0][i], i8[1][i] = input[i], input[8+i]
}
if half {
blockMd5_avx2(&d8a, i8[0], s.allBufs, &s.maskRounds8a)
} else {
wg := sync.WaitGroup{}
wg.Add(2)
go func() { blockMd5_avx2(&d8a, i8[0], s.allBufs, &s.maskRounds8a); wg.Done() }()
go func() { blockMd5_avx2(&d8b, i8[1], s.allBufs, &s.maskRounds8b); wg.Done() }()
wg.Wait()
}
for i := range d8a.v0 {
j := i + 8
d.v0[i], d.v1[i], d.v2[i], d.v3[i] = d8a.v0[i], d8a.v1[i], d8a.v2[i], d8a.v3[i]
if !half {
d.v0[j], d.v1[j], d.v2[j], d.v3[j] = d8b.v0[i], d8b.v1[i], d8b.v2[i], d8b.v3[i]
}
}
}
}
// Interface function to AVX512 assembly code
func blockMd5_avx512(s *digest16, input [16][]byte, base []byte, maskRounds *[16]maskRounds) {
baseMin := uint64(uintptr(unsafe.Pointer(&(base[0]))))
ptrs := [16]int32{}
for i := range ptrs {
if len(input[i]) > 0 {
if len(input[i]) > internalBlockSize {
panic(fmt.Sprintf("Sanity check fails for lane %d: maximum input length cannot exceed internalBlockSize", i))
}
off := uint64(uintptr(unsafe.Pointer(&(input[i][0])))) - baseMin
if off > math.MaxUint32 {
panic(fmt.Sprintf("invalid buffer sent with offset %x", off))
}
ptrs[i] = int32(off)
}
}
sdup := *s // create copy of initial states to receive intermediate updates
rounds := generateMaskAndRounds16(input, maskRounds)
for r := 0; r < rounds; r++ {
m := maskRounds[r]
block16(&sdup.v0[0], uintptr(baseMin), &ptrs[0], m.mask, int(64*m.rounds))
for j := 0; j < len(ptrs); j++ {
ptrs[j] += int32(64 * m.rounds) // update pointers for next round
if m.mask&(1<<j) != 0 { // update digest if still masked as active
(*s).v0[j], (*s).v1[j], (*s).v2[j], (*s).v3[j] = sdup.v0[j], sdup.v1[j], sdup.v2[j], sdup.v3[j]
}
}
}
}
// Interface function to AVX2 assembly code
func blockMd5_avx2(s *digest8, input [8][]byte, base []byte, maskRounds *[8]maskRounds) {
baseMin := uint64(uintptr(unsafe.Pointer(&(base[0])))) - 4
ptrs := [8]int32{}
for i := range ptrs {
if len(input[i]) > 0 {
if len(input[i]) > internalBlockSize {
panic(fmt.Sprintf("Sanity check fails for lane %d: maximum input length cannot exceed internalBlockSize", i))
}
off := uint64(uintptr(unsafe.Pointer(&(input[i][0])))) - baseMin
if off > math.MaxUint32 {
panic(fmt.Sprintf("invalid buffer sent with offset %x", off))
}
ptrs[i] = int32(off)
}
}
sdup := *s // create copy of initial states to receive intermediate updates
rounds := generateMaskAndRounds8(input, maskRounds)
for r := 0; r < rounds; r++ {
m := maskRounds[r]
var cache cache8 // stack storage for block8 tmp state
block8(&sdup.v0[0], uintptr(baseMin), &ptrs[0], &cache[0], int(64*m.rounds))
for j := 0; j < len(ptrs); j++ {
ptrs[j] += int32(64 * m.rounds) // update pointers for next round
if m.mask&(1<<j) != 0 { // update digest if still masked as active
(*s).v0[j], (*s).v1[j], (*s).v2[j], (*s).v3[j] = sdup.v0[j], sdup.v1[j], sdup.v2[j], sdup.v3[j]
}
}
}
}

vendor/github.com/minio/md5-simd/md5-digest_amd64.go generated vendored Normal file

@ -0,0 +1,178 @@
//+build !noasm,!appengine,gc
// Copyright (c) 2020 MinIO Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.
package md5simd
import (
"encoding/binary"
"errors"
"fmt"
"sync/atomic"
)
// md5Digest - Type for computing MD5 using either AVX2 or AVX512
type md5Digest struct {
uid uint64
blocksCh chan blockInput
cycleServer chan uint64
x [BlockSize]byte
nx int
len uint64
buffers <-chan []byte
}
// NewHash - initialize instance for Md5 implementation.
func (s *md5Server) NewHash() Hasher {
uid := atomic.AddUint64(&s.uidCounter, 1)
blockCh := make(chan blockInput, buffersPerLane)
s.newInput <- newClient{
uid: uid,
input: blockCh,
}
return &md5Digest{
uid: uid,
buffers: s.buffers,
blocksCh: blockCh,
cycleServer: s.cycle,
}
}
// Size - Return size of checksum
func (d *md5Digest) Size() int { return Size }
// BlockSize - Return blocksize of checksum
func (d md5Digest) BlockSize() int { return BlockSize }
func (d *md5Digest) Reset() {
if d.blocksCh == nil {
panic("reset after close")
}
d.nx = 0
d.len = 0
d.sendBlock(blockInput{uid: d.uid, reset: true}, false)
}
// write to digest
func (d *md5Digest) Write(p []byte) (nn int, err error) {
if d.blocksCh == nil {
return 0, errors.New("md5Digest closed")
}
// break input into chunks of maximum internalBlockSize size
for {
l := len(p)
if l > internalBlockSize {
l = internalBlockSize
}
nnn, err := d.write(p[:l])
if err != nil {
return nn, err
}
nn += nnn
p = p[l:]
if len(p) == 0 {
break
}
}
return
}
func (d *md5Digest) write(p []byte) (nn int, err error) {
nn = len(p)
d.len += uint64(nn)
if d.nx > 0 {
n := copy(d.x[d.nx:], p)
d.nx += n
if d.nx == BlockSize {
// Create a copy of the overflow buffer in order to send it async over the channel
// (since we will modify the overflow buffer down below with any access beyond multiples of 64)
tmp := <-d.buffers
tmp = tmp[:BlockSize]
copy(tmp, d.x[:])
d.sendBlock(blockInput{uid: d.uid, msg: tmp}, len(p)-n < BlockSize)
d.nx = 0
}
p = p[n:]
}
if len(p) >= BlockSize {
n := len(p) &^ (BlockSize - 1)
buf := <-d.buffers
buf = buf[:n]
copy(buf, p)
d.sendBlock(blockInput{uid: d.uid, msg: buf}, len(p)-n < BlockSize)
p = p[n:]
}
if len(p) > 0 {
d.nx = copy(d.x[:], p)
}
return
}
func (d *md5Digest) Close() {
if d.blocksCh != nil {
close(d.blocksCh)
d.blocksCh = nil
}
}
// Sum - Return MD5 sum in bytes
func (d *md5Digest) Sum(in []byte) (result []byte) {
if d.blocksCh == nil {
panic("sum after close")
}
trail := <-d.buffers
trail = append(trail[:0], d.x[:d.nx]...)
length := d.len
// Padding. Add a 1 bit and 0 bits until 56 bytes mod 64.
var tmp [64]byte
tmp[0] = 0x80
if length%64 < 56 {
trail = append(trail, tmp[0:56-length%64]...)
} else {
trail = append(trail, tmp[0:64+56-length%64]...)
}
// Length in bits.
length <<= 3
binary.LittleEndian.PutUint64(tmp[:], length) // append length in bits
trail = append(trail, tmp[0:8]...)
if len(trail)%BlockSize != 0 {
panic(fmt.Errorf("internal error: sum block was not aligned. len=%d, nx=%d", len(trail), d.nx))
}
sumCh := make(chan sumResult, 1)
d.sendBlock(blockInput{uid: d.uid, msg: trail, sumCh: sumCh}, true)
sum := <-sumCh
return append(in, sum.digest[:]...)
}
// sendBlock will send a block for processing.
// If cycle is true we will block on cycle, otherwise we will only block
// if the block channel is full.
func (d *md5Digest) sendBlock(bi blockInput, cycle bool) {
if cycle {
select {
case d.blocksCh <- bi:
d.cycleServer <- d.uid
}
return
}
// Only block on cycle if we filled the buffer
select {
case d.blocksCh <- bi:
return
default:
d.cycleServer <- d.uid
d.blocksCh <- bi
}
}

vendor/github.com/minio/md5-simd/md5-server_amd64.go generated vendored Normal file

@ -0,0 +1,307 @@
//+build !noasm,!appengine,gc
// Copyright (c) 2020 MinIO Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.
package md5simd
import (
"encoding/binary"
"fmt"
"runtime"
"github.com/klauspost/cpuid"
)
// MD5 initialization constants
const (
// Lanes is the number of concurrently calculated hashes.
Lanes = 16
init0 = 0x67452301
init1 = 0xefcdab89
init2 = 0x98badcfe
init3 = 0x10325476
)
// md5ServerUID - Does not start at 0 but at the next multiple of 16 so as to be able to
// differentiate from the default initialisation value of 0
const md5ServerUID = Lanes
const buffersPerLane = 3
// Message to send across input channel
type blockInput struct {
uid uint64
msg []byte
sumCh chan sumResult
reset bool
}
type sumResult struct {
digest [Size]byte
}
type lanesInfo [Lanes]blockInput
// md5Server - Type to implement parallel handling of MD5 invocations
type md5Server struct {
uidCounter uint64
cycle chan uint64 // client with uid has update.
newInput chan newClient // Add new client.
digests map[uint64][Size]byte // Map of uids to (interim) digest results
maskRounds16 [16]maskRounds // Pre-allocated static array for max 16 rounds
maskRounds8a [8]maskRounds // Pre-allocated static array for max 8 rounds (1st AVX2 core)
maskRounds8b [8]maskRounds // Pre-allocated static array for max 8 rounds (2nd AVX2 core)
allBufs []byte // Preallocated buffer.
buffers chan []byte // Preallocated buffers, sliced from allBufs.
}
// NewServer - Create new object for parallel processing handling
func NewServer() Server {
if !cpuid.CPU.AVX2() {
return &fallbackServer{}
}
md5srv := &md5Server{}
md5srv.digests = make(map[uint64][Size]byte)
md5srv.newInput = make(chan newClient, Lanes)
md5srv.cycle = make(chan uint64, Lanes*10)
md5srv.uidCounter = md5ServerUID - 1
md5srv.allBufs = make([]byte, 32+buffersPerLane*Lanes*internalBlockSize)
md5srv.buffers = make(chan []byte, buffersPerLane*Lanes)
// Fill buffers.
for i := 0; i < buffersPerLane*Lanes; i++ {
s := 32 + i*internalBlockSize
md5srv.buffers <- md5srv.allBufs[s : s+internalBlockSize : s+internalBlockSize]
}
// Start a single thread for reading from the input channel
go md5srv.process(md5srv.newInput)
return md5srv
}
type newClient struct {
uid uint64
input chan blockInput
}
// process - Sole handler for reading from the input channel.
func (s *md5Server) process(newClients chan newClient) {
// To fill up as many lanes as possible:
//
// 1. Wait for a cycle id.
// 2. If not already in a lane, add, otherwise leave on channel
// 3. Start timer
// 4. Check if lanes is full, if so, goto 10 (process).
// 5. If timeout, goto 10.
// 6. Wait for new id (goto 2) or timeout (goto 10).
// 10. Process.
// 11. Check all input if there is already input, if so add to lanes.
// 12. Goto 1
// lanes contains the lanes.
var lanes lanesInfo
// lanesFilled contains the number of filled lanes for current cycle.
var lanesFilled int
// clients contains active clients
var clients = make(map[uint64]chan blockInput, Lanes)
addToLane := func(uid uint64) {
cl, ok := clients[uid]
if !ok {
// Unknown client. Maybe it was already removed.
return
}
// Check if we already have it.
for _, lane := range lanes[:lanesFilled] {
if lane.uid == uid {
return
}
}
// Continue until we get a block or there is nothing on channel
for {
select {
case block, ok := <-cl:
if !ok {
// Client disconnected
delete(clients, block.uid)
return
}
if block.uid != uid {
panic(fmt.Errorf("uid mismatch, %d (block) != %d (client)", block.uid, uid))
}
// If reset message, reset and we're done
if block.reset {
delete(s.digests, uid)
continue
}
// If requesting sum, we will need to maintain state.
if block.sumCh != nil {
var dig digest
d, ok := s.digests[uid]
if ok {
dig.s[0] = binary.LittleEndian.Uint32(d[0:4])
dig.s[1] = binary.LittleEndian.Uint32(d[4:8])
dig.s[2] = binary.LittleEndian.Uint32(d[8:12])
dig.s[3] = binary.LittleEndian.Uint32(d[12:16])
} else {
dig.s[0], dig.s[1], dig.s[2], dig.s[3] = init0, init1, init2, init3
}
sum := sumResult{}
// Add end block to current digest.
blockGeneric(&dig, block.msg)
binary.LittleEndian.PutUint32(sum.digest[0:], dig.s[0])
binary.LittleEndian.PutUint32(sum.digest[4:], dig.s[1])
binary.LittleEndian.PutUint32(sum.digest[8:], dig.s[2])
binary.LittleEndian.PutUint32(sum.digest[12:], dig.s[3])
block.sumCh <- sum
if block.msg != nil {
s.buffers <- block.msg
}
continue
}
if len(block.msg) == 0 {
continue
}
lanes[lanesFilled] = block
lanesFilled++
return
default:
return
}
}
}
addNewClient := func(cl newClient) {
if _, ok := clients[cl.uid]; ok {
panic("internal error: duplicate client registration")
}
clients[cl.uid] = cl.input
}
allLanesFilled := func() bool {
return lanesFilled == Lanes || lanesFilled >= len(clients)
}
for {
// Step 1.
for lanesFilled == 0 {
select {
case cl, ok := <-newClients:
if !ok {
return
}
addNewClient(cl)
// Check if it already sent a payload.
addToLane(cl.uid)
continue
case uid := <-s.cycle:
addToLane(uid)
}
}
fillLanes:
for !allLanesFilled() {
select {
case cl, ok := <-newClients:
if !ok {
return
}
addNewClient(cl)
case uid := <-s.cycle:
addToLane(uid)
default:
// Nothing more queued...
break fillLanes
}
}
// If we did not fill all lanes, check if there is more waiting
if !allLanesFilled() {
runtime.Gosched()
for uid := range clients {
addToLane(uid)
if allLanesFilled() {
break
}
}
}
if false {
if !allLanesFilled() {
fmt.Println("Not all lanes filled", lanesFilled, "of", len(clients))
//pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
} else if true {
fmt.Println("all lanes filled")
}
}
// Process the lanes we could collect
s.blocks(lanes[:lanesFilled])
// Clear lanes...
lanesFilled = 0
// Add all current queued
for uid := range clients {
addToLane(uid)
if allLanesFilled() {
break
}
}
}
}
func (s *md5Server) Close() {
if s.newInput != nil {
close(s.newInput)
s.newInput = nil
}
}
// Invoke assembly and send results back
func (s *md5Server) blocks(lanes []blockInput) {
inputs := [16][]byte{}
for i := range lanes {
inputs[i] = lanes[i].msg
}
// Collect active digests...
state := s.getDigests(lanes)
// Process all lanes...
s.blockMd5_x16(&state, inputs, len(lanes) <= 8)
for i, lane := range lanes {
uid := lane.uid
dig := [Size]byte{}
binary.LittleEndian.PutUint32(dig[0:], state.v0[i])
binary.LittleEndian.PutUint32(dig[4:], state.v1[i])
binary.LittleEndian.PutUint32(dig[8:], state.v2[i])
binary.LittleEndian.PutUint32(dig[12:], state.v3[i])
s.digests[uid] = dig
if lane.msg != nil {
s.buffers <- lane.msg
}
lanes[i] = blockInput{}
}
}
func (s *md5Server) getDigests(lanes []blockInput) (d digest16) {
for i, lane := range lanes {
a, ok := s.digests[lane.uid]
if ok {
d.v0[i] = binary.LittleEndian.Uint32(a[0:4])
d.v1[i] = binary.LittleEndian.Uint32(a[4:8])
d.v2[i] = binary.LittleEndian.Uint32(a[8:12])
d.v3[i] = binary.LittleEndian.Uint32(a[12:16])
} else {
d.v0[i] = init0
d.v1[i] = init1
d.v2[i] = init2
d.v3[i] = init3
}
}
return
}


@ -0,0 +1,12 @@
//+build !amd64 appengine !gc noasm
// Copyright (c) 2020 MinIO Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.
package md5simd
// NewServer - Create new object for parallel processing handling
func NewServer() *fallbackServer {
return &fallbackServer{}
}

vendor/github.com/minio/md5-simd/md5-util_amd64.go generated vendored Normal file

@ -0,0 +1,70 @@
// Copyright (c) 2020 MinIO Inc. All rights reserved.
// Use of this source code is governed by a license that can be
// found in the LICENSE file.
package md5simd
import (
"sort"
)
// Helper struct for sorting blocks based on length
type lane struct {
len uint
pos uint
}
// Helper struct for generating number of rounds in combination with mask for valid lanes
type maskRounds struct {
mask uint64
rounds uint64
}
func generateMaskAndRounds8(input [8][]byte, mr *[8]maskRounds) (rounds int) {
// Sort on blocks length small to large
var sorted [8]lane
for c, inpt := range input {
sorted[c] = lane{uint(len(inpt)), uint(c)}
}
sort.Slice(sorted[:], func(i, j int) bool { return sorted[i].len < sorted[j].len })
// Create mask array including 'rounds' (of processing blocks of 64 bytes) between masks
m, round := uint64(0xff), uint64(0)
for _, s := range sorted {
if s.len > 0 {
if uint64(s.len)>>6 > round {
mr[rounds] = maskRounds{m, (uint64(s.len) >> 6) - round}
rounds++
}
round = uint64(s.len) >> 6
}
m = m & ^(1 << uint(s.pos))
}
return
}
func generateMaskAndRounds16(input [16][]byte, mr *[16]maskRounds) (rounds int) {
// Sort on blocks length small to large
var sorted [16]lane
for c, inpt := range input {
sorted[c] = lane{uint(len(inpt)), uint(c)}
}
sort.Slice(sorted[:], func(i, j int) bool { return sorted[i].len < sorted[j].len })
// Create mask array including 'rounds' (of processing blocks of 64 bytes) between masks
m, round := uint64(0xffff), uint64(0)
for _, s := range sorted {
if s.len > 0 {
if uint64(s.len)>>6 > round {
mr[rounds] = maskRounds{m, (uint64(s.len) >> 6) - round}
rounds++
}
round = uint64(s.len) >> 6
}
m = m & ^(1 << uint(s.pos))
}
return
}

vendor/github.com/minio/md5-simd/md5.go generated vendored Normal file

@ -0,0 +1,57 @@
package md5simd
import (
"crypto/md5"
"hash"
"sync"
)
const (
// The blocksize of MD5 in bytes.
BlockSize = 64
// The size of an MD5 checksum in bytes.
Size = 16
// internalBlockSize is the internal block size.
internalBlockSize = 32 << 10
)
type Server interface {
NewHash() Hasher
Close()
}
type Hasher interface {
hash.Hash
Close()
}
// md5Wrapper is a wrapper around the builtin hasher.
type md5Wrapper struct {
hash.Hash
}
var md5Pool = sync.Pool{New: func() interface{} {
return md5.New()
}}
// fallbackServer - Fallback when no assembly is available.
type fallbackServer struct {
}
// NewHash -- return regular Golang md5 hashing from crypto
func (s *fallbackServer) NewHash() Hasher {
return &md5Wrapper{Hash: md5Pool.New().(hash.Hash)}
}
func (s *fallbackServer) Close() {
}
func (m *md5Wrapper) Close() {
if m.Hash != nil {
m.Reset()
md5Pool.Put(m.Hash)
m.Hash = nil
}
}

vendor/github.com/minio/minio-go/v7/.gitignore generated vendored Normal file

@ -0,0 +1,4 @@
*~
*.test
validator
golangci-lint

vendor/github.com/minio/minio-go/v7/.golangci.yml generated vendored Normal file

@ -0,0 +1,27 @@
linters-settings:
misspell:
locale: US
linters:
disable-all: true
enable:
- typecheck
- goimports
- misspell
- revive
- govet
- ineffassign
- gosimple
- deadcode
- structcheck
- gocritic
issues:
exclude-use-default: false
exclude:
# todo fix these when we get enough time.
- "singleCaseSwitch: should rewrite switch statement to if statement"
- "unlambda: replace"
- "captLocal:"
- "ifElseChain:"
- "elseif:"

vendor/github.com/minio/minio-go/v7/CNAME generated vendored Normal file

@ -0,0 +1 @@
minio-go.min.io

vendor/github.com/minio/minio-go/v7/CONTRIBUTING.md generated vendored Normal file

@ -0,0 +1,22 @@
### Developer Guidelines
``minio-go`` welcomes your contribution. To make the process as seamless as possible, we ask for the following:
* Go ahead and fork the project and make your changes. We encourage pull requests to discuss code changes.
- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Add some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request
* When you're ready to create a pull request, be sure to:
- Have test cases for the new code. If you have questions about how to do it, please ask in your pull request.
- Run `go fmt`
- Squash your commits into a single commit. `git rebase -i`. It's okay to force update your pull request.
- Make sure `go test -race ./...` and `go build` completes.
NOTE: go test runs functional tests and requires you to have an AWS S3 account. Set the credentials as the environment variables
``ACCESS_KEY`` and ``SECRET_KEY``. To run a shorter version of the tests, please use ``go test -short -race ./...``
* Read [Effective Go](https://github.com/golang/go/wiki/CodeReviewComments) article from Golang project
- `minio-go` project is strictly conformant with Golang style
- if you happen to observe offending code, please feel free to send a pull request

vendor/github.com/minio/minio-go/v7/LICENSE generated vendored Normal file

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/minio/minio-go/v7/MAINTAINERS.md generated vendored Normal file

@ -0,0 +1,35 @@
# For maintainers only
## Responsibilities
Please go through this link [Maintainer Responsibility](https://gist.github.com/abperiasamy/f4d9b31d3186bbd26522)
### Making new releases
Tag and sign your release commit, additionally this step requires you to have access to MinIO's trusted private key.
```sh
$ export GNUPGHOME=/media/${USER}/minio/trusted
$ git tag -s 4.0.0
$ git push
$ git push --tags
```
### Update version
Once a release has been made, update the `libraryVersion` constant in `api.go` to the next to-be-released version.
```sh
$ grep libraryVersion api.go
libraryVersion = "4.0.1"
```
Commit your changes
```
$ git commit -a -m "Update version for next release" --author "MinIO Trusted <trusted@min.io>"
```
### Announce
Announce the new release by adding release notes at https://github.com/minio/minio-go/releases from the `trusted@min.io` account. Release notes require two sections, `highlights` and `changelog`. Highlights is a bulleted list of salient features in this release, and Changelog contains a list of all commits since the last release.
To generate `changelog`
```sh
$ git log --no-color --pretty=format:'-%d %s (%cr) <%an>' <last_release_tag>..<latest_release_tag>
```

vendor/github.com/minio/minio-go/v7/Makefile generated vendored Normal file

@ -0,0 +1,35 @@
GOPATH := $(shell go env GOPATH)
TMPDIR := $(shell mktemp -d)
all: checks
.PHONY: examples docs
checks: lint vet test examples functional-test
lint:
@mkdir -p ${GOPATH}/bin
@echo "Installing golangci-lint" && curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v1.45.2
@echo "Running $@ check"
@GO111MODULE=on ${GOPATH}/bin/golangci-lint cache clean
@GO111MODULE=on ${GOPATH}/bin/golangci-lint run --timeout=5m --config ./.golangci.yml
vet:
@GO111MODULE=on go vet ./...
test:
@GO111MODULE=on SERVER_ENDPOINT=localhost:9000 ACCESS_KEY=minio SECRET_KEY=minio123 ENABLE_HTTPS=1 MINT_MODE=full go test -race -v ./...
examples:
@echo "Building s3 examples"
@cd ./examples/s3 && $(foreach v,$(wildcard examples/s3/*.go),go build -mod=mod -o ${TMPDIR}/$(basename $(v)) $(notdir $(v)) || exit 1;)
@echo "Building minio examples"
@cd ./examples/minio && $(foreach v,$(wildcard examples/minio/*.go),go build -mod=mod -o ${TMPDIR}/$(basename $(v)) $(notdir $(v)) || exit 1;)
functional-test:
@GO111MODULE=on SERVER_ENDPOINT=localhost:9000 ACCESS_KEY=minio SECRET_KEY=minio123 ENABLE_HTTPS=1 MINT_MODE=full go run functional_tests.go
clean:
@echo "Cleaning up all the generated files"
@find . -name '*.test' | xargs rm -fv
@find . -name '*~' | xargs rm -fv

vendor/github.com/minio/minio-go/v7/NOTICE generated vendored Normal file

@ -0,0 +1,9 @@
MinIO Cloud Storage, (C) 2014-2020 MinIO, Inc.
This product includes software developed at MinIO, Inc.
(https://min.io/).
The MinIO project contains unmodified/modified subcomponents too with
separate copyright notices and license terms. Your use of the source
code for these subcomponents is subject to the terms and conditions
of Apache License Version 2.0

vendor/github.com/minio/minio-go/v7/README.md generated vendored Normal file

@ -0,0 +1,246 @@
# MinIO Go Client SDK for Amazon S3 Compatible Cloud Storage [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Sourcegraph](https://sourcegraph.com/github.com/minio/minio-go/-/badge.svg)](https://sourcegraph.com/github.com/minio/minio-go?badge) [![Apache V2 License](https://img.shields.io/badge/license-Apache%20V2-blue.svg)](https://github.com/minio/minio-go/blob/master/LICENSE)
The MinIO Go Client SDK provides simple APIs to access any Amazon S3 compatible object storage.
This quickstart guide shows you how to install the MinIO client SDK and connect to MinIO, and provides a walkthrough of a simple file uploader. For a complete list of APIs and examples, please take a look at the [Go Client API Reference](https://docs.min.io/docs/golang-client-api-reference).
This document assumes that you have a working [Go development environment](https://golang.org/doc/install).
## Download from Github
```sh
go get github.com/minio/minio-go/v7
```
## Initialize MinIO Client
The MinIO client requires the following parameters to be specified in order to connect to an Amazon S3 compatible object storage.
| Parameter | Description|
| :--- | :--- |
| endpoint | URL to object storage service. |
| _minio.Options_ | All the options such as credentials, custom transport etc. |
```go
package main
import (
"log"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
)
func main() {
endpoint := "play.min.io"
accessKeyID := "Q3AM3UQ867SPQQA43P2F"
secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
useSSL := true
// Initialize minio client object.
minioClient, err := minio.New(endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
Secure: useSSL,
})
if err != nil {
log.Fatalln(err)
}
log.Printf("%#v\n", minioClient) // minioClient is now set up
}
```
## Quick Start Example - File Uploader
This example program connects to an object storage server, creates a bucket and uploads a file to the bucket.
We will use the MinIO server running at [https://play.min.io](https://play.min.io) in this example. Feel free to use this service for testing and development. Access credentials shown in this example are open to the public.
### FileUploader.go
```go
package main
import (
"context"
"log"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
)
func main() {
ctx := context.Background()
endpoint := "play.min.io"
accessKeyID := "Q3AM3UQ867SPQQA43P2F"
secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
useSSL := true
// Initialize minio client object.
minioClient, err := minio.New(endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
Secure: useSSL,
})
if err != nil {
log.Fatalln(err)
}
// Make a new bucket called mymusic.
bucketName := "mymusic"
location := "us-east-1"
err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location})
if err != nil {
// Check to see if we already own this bucket (which happens if you run this twice)
exists, errBucketExists := minioClient.BucketExists(ctx, bucketName)
if errBucketExists == nil && exists {
log.Printf("We already own %s\n", bucketName)
} else {
log.Fatalln(err)
}
} else {
log.Printf("Successfully created %s\n", bucketName)
}
// Upload the zip file
objectName := "golden-oldies.zip"
filePath := "/tmp/golden-oldies.zip"
contentType := "application/zip"
// Upload the zip file with FPutObject
info, err := minioClient.FPutObject(ctx, bucketName, objectName, filePath, minio.PutObjectOptions{ContentType: contentType})
if err != nil {
log.Fatalln(err)
}
log.Printf("Successfully uploaded %s of size %d\n", objectName, info.Size)
}
```
### Run FileUploader
```sh
go run file-uploader.go
2016/08/13 17:03:28 Successfully created mymusic
2016/08/13 17:03:40 Successfully uploaded golden-oldies.zip of size 16253413
mc ls play/mymusic/
[2016-05-27 16:02:16 PDT] 17MiB golden-oldies.zip
```
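As a round-trip check, you can download the object you just uploaded with `FGetObject`. A minimal sketch that reuses the client, bucket, and object names from `FileUploader.go` (the local download path is our own choice):

```go
// Download the uploaded object back to a local file.
err = minioClient.FGetObject(ctx, bucketName, objectName, "/tmp/golden-oldies-copy.zip", minio.GetObjectOptions{})
if err != nil {
	log.Fatalln(err)
}
log.Println("Successfully downloaded", objectName)
```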
## API Reference
The full API Reference is available here.
* [Complete API Reference](https://docs.min.io/docs/golang-client-api-reference)
### API Reference : Bucket Operations
* [`MakeBucket`](https://docs.min.io/docs/golang-client-api-reference#MakeBucket)
* [`ListBuckets`](https://docs.min.io/docs/golang-client-api-reference#ListBuckets)
* [`BucketExists`](https://docs.min.io/docs/golang-client-api-reference#BucketExists)
* [`RemoveBucket`](https://docs.min.io/docs/golang-client-api-reference#RemoveBucket)
* [`ListObjects`](https://docs.min.io/docs/golang-client-api-reference#ListObjects)
* [`ListIncompleteUploads`](https://docs.min.io/docs/golang-client-api-reference#ListIncompleteUploads)
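Note that `ListObjects` returns a channel of `ObjectInfo` values rather than a slice, so listing errors surface on each entry. A minimal sketch of walking a bucket (bucket name and prefix are placeholders):

```go
opts := minio.ListObjectsOptions{
	Prefix:    "photos/",
	Recursive: true,
}
for object := range minioClient.ListObjects(ctx, bucketName, opts) {
	if object.Err != nil {
		log.Fatalln(object.Err)
	}
	log.Println(object.Key, object.Size)
}
```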
### API Reference : Bucket policy Operations
* [`SetBucketPolicy`](https://docs.min.io/docs/golang-client-api-reference#SetBucketPolicy)
* [`GetBucketPolicy`](https://docs.min.io/docs/golang-client-api-reference#GetBucketPolicy)
### API Reference : Bucket notification Operations
* [`SetBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#SetBucketNotification)
* [`GetBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#GetBucketNotification)
* [`RemoveAllBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#RemoveAllBucketNotification)
* [`ListenBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#ListenBucketNotification) (MinIO Extension)
* [`ListenNotification`](https://docs.min.io/docs/golang-client-api-reference#ListenNotification) (MinIO Extension)
### API Reference : File Object Operations
* [`FPutObject`](https://docs.min.io/docs/golang-client-api-reference#FPutObject)
* [`FGetObject`](https://docs.min.io/docs/golang-client-api-reference#FGetObject)
### API Reference : Object Operations
* [`GetObject`](https://docs.min.io/docs/golang-client-api-reference#GetObject)
* [`PutObject`](https://docs.min.io/docs/golang-client-api-reference#PutObject)
* [`PutObjectStreaming`](https://docs.min.io/docs/golang-client-api-reference#PutObjectStreaming)
* [`StatObject`](https://docs.min.io/docs/golang-client-api-reference#StatObject)
* [`CopyObject`](https://docs.min.io/docs/golang-client-api-reference#CopyObject)
* [`RemoveObject`](https://docs.min.io/docs/golang-client-api-reference#RemoveObject)
* [`RemoveObjects`](https://docs.min.io/docs/golang-client-api-reference#RemoveObjects)
* [`RemoveIncompleteUpload`](https://docs.min.io/docs/golang-client-api-reference#RemoveIncompleteUpload)
* [`SelectObjectContent`](https://docs.min.io/docs/golang-client-api-reference#SelectObjectContent)
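`GetObject` returns a streaming `*minio.Object` handle, so large objects can be copied without buffering them fully in memory. A minimal sketch, assuming `io` and `os` are imported and reusing the names from the uploader example:

```go
object, err := minioClient.GetObject(ctx, bucketName, objectName, minio.GetObjectOptions{})
if err != nil {
	log.Fatalln(err)
}
defer object.Close()

localFile, err := os.Create("/tmp/local-copy.zip")
if err != nil {
	log.Fatalln(err)
}
defer localFile.Close()

// Stream the object body straight into the local file.
if _, err = io.Copy(localFile, object); err != nil {
	log.Fatalln(err)
}
```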
### API Reference : Presigned Operations
* [`PresignedGetObject`](https://docs.min.io/docs/golang-client-api-reference#PresignedGetObject)
* [`PresignedPutObject`](https://docs.min.io/docs/golang-client-api-reference#PresignedPutObject)
* [`PresignedHeadObject`](https://docs.min.io/docs/golang-client-api-reference#PresignedHeadObject)
* [`PresignedPostPolicy`](https://docs.min.io/docs/golang-client-api-reference#PresignedPostPolicy)
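Presigned URLs let clients download or upload objects without holding your credentials. A minimal sketch that generates a download link valid for 24 hours (the expiry and the `response-content-disposition` parameter are our own choices; `net/url` and `time` are assumed to be imported):

```go
reqParams := make(url.Values)
reqParams.Set("response-content-disposition", "attachment; filename=\"golden-oldies.zip\"")

presignedURL, err := minioClient.PresignedGetObject(ctx, bucketName, objectName, 24*time.Hour, reqParams)
if err != nil {
	log.Fatalln(err)
}
log.Println("Share this URL:", presignedURL)
```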
### API Reference : Client custom settings
* [`SetAppInfo`](https://docs.min.io/docs/golang-client-api-reference#SetAppInfo)
* [`TraceOn`](https://docs.min.io/docs/golang-client-api-reference#TraceOn)
* [`TraceOff`](https://docs.min.io/docs/golang-client-api-reference#TraceOff)
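These hooks are useful when debugging: `SetAppInfo` appends your application name and version to the `User-Agent` header, while `TraceOn`/`TraceOff` toggle HTTP request/response tracing. A minimal sketch (app name and version are placeholders; `os` is assumed to be imported):

```go
// Identify this application in the User-Agent header.
minioClient.SetAppInfo("myCloudApp", "1.0.0")

// Dump HTTP request/response traces to stderr while debugging.
minioClient.TraceOn(os.Stderr)
defer minioClient.TraceOff()
```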
## Full Examples
### Full Examples : Bucket Operations
* [makebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/makebucket.go)
* [listbuckets.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbuckets.go)
* [bucketexists.go](https://github.com/minio/minio-go/blob/master/examples/s3/bucketexists.go)
* [removebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucket.go)
* [listobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjects.go)
* [listobjectsV2.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjectsV2.go)
* [listincompleteuploads.go](https://github.com/minio/minio-go/blob/master/examples/s3/listincompleteuploads.go)
### Full Examples : Bucket policy Operations
* [setbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketpolicy.go)
* [getbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketpolicy.go)
* [listbucketpolicies.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbucketpolicies.go)
### Full Examples : Bucket lifecycle Operations
* [setbucketlifecycle.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketlifecycle.go)
* [getbucketlifecycle.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketlifecycle.go)
### Full Examples : Bucket encryption Operations
* [setbucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketencryption.go)
* [getbucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketencryption.go)
* [deletebucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/deletebucketencryption.go)
### Full Examples : Bucket replication Operations
* [setbucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketreplication.go)
* [getbucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketreplication.go)
* [removebucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucketreplication.go)
### Full Examples : Bucket notification Operations
* [setbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketnotification.go)
* [getbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketnotification.go)
* [removeallbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeallbucketnotification.go)
* [listenbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/minio/listenbucketnotification.go) (MinIO Extension)
* [listennotification.go](https://github.com/minio/minio-go/blob/master/examples/minio/listen-notification.go) (MinIO Extension)
### Full Examples : File Object Operations
* [fputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputobject.go)
* [fgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fgetobject.go)
### Full Examples : Object Operations
* [putobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/putobject.go)
* [getobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/getobject.go)
* [statobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/statobject.go)
* [copyobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/copyobject.go)
* [removeobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobject.go)
* [removeincompleteupload.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeincompleteupload.go)
* [removeobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobjects.go)
### Full Examples : Encrypted Object Operations
* [put-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/put-encrypted-object.go)
* [get-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/get-encrypted-object.go)
* [fput-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputencrypted-object.go)
### Full Examples : Presigned Operations
* [presignedgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedgetobject.go)
* [presignedputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedputobject.go)
* [presignedheadobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedheadobject.go)
* [presignedpostpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedpostpolicy.go)
## Explore Further
* [Complete Documentation](https://docs.min.io)
* [MinIO Go Client SDK API Reference](https://docs.min.io/docs/golang-client-api-reference)
## Contribute
[Contributors Guide](https://github.com/minio/minio-go/blob/master/CONTRIBUTING.md)
## License
This SDK is distributed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0), see [LICENSE](https://github.com/minio/minio-go/blob/master/LICENSE) and [NOTICE](https://github.com/minio/minio-go/blob/master/NOTICE) for more information.

260
vendor/github.com/minio/minio-go/v7/README_zh_CN.md generated vendored Normal file
View File

@ -0,0 +1,260 @@
# MinIO Go Client SDK for Amazon S3 Compatible Cloud Storage [![Slack](https://slack.min.io/slack?type=svg)](https://slack.min.io) [![Sourcegraph](https://sourcegraph.com/github.com/minio/minio-go/-/badge.svg)](https://sourcegraph.com/github.com/minio/minio-go?badge)
The MinIO Go Client SDK provides simple APIs to access any Amazon S3 compatible object storage service.
**Supported cloud storage providers:**
- AWS Signature Version 4
- Amazon S3
- MinIO
- AWS Signature Version 2
- Google Cloud Storage (Compatibility Mode)
- Openstack Swift + Swift3 middleware
- Ceph Object Gateway
- Riak CS
This guide shows how to install the MinIO client SDK and connect to MinIO, and walks through a simple file upload example. For the complete list of APIs and examples, see the [Go Client API Reference](https://docs.min.io/docs/golang-client-api-reference).
This document assumes that you have a working [Go development environment](https://golang.org/doc/install).
## Download from GitHub
```sh
go get github.com/minio/minio-go/v7
```
## Initialize MinIO Client
The MinIO client requires the following four parameters to connect to an Amazon S3 compatible object storage.
| Parameter | Description|
| :--- | :--- |
| endpoint | URL to object storage service. |
| accessKeyID | Access key is the user ID that uniquely identifies your account. |
| secretAccessKey | Secret key is the password of your account. |
| secure | Set to true to use HTTPS. |
```go
package main
import (
"log"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
)
func main() {
endpoint := "play.min.io"
accessKeyID := "Q3AM3UQ867SPQQA43P2F"
secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
useSSL := true
// Initialize minio client object.
minioClient, err := minio.New(endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
Secure: useSSL,
})
if err != nil {
log.Fatalln(err)
}
log.Printf("%#v\n", minioClient) // minioClient is now set up
}
```
## Quick Start Example - File Uploader
This example program connects to an object storage server, creates a bucket, and uploads a file to the bucket.
We use the MinIO server running at [https://play.min.io](https://play.min.io) in this example. Feel free to use this service for testing and development. The access credentials shown in this example are open to the public.
### FileUploader.go
```go
package main
import (
"context"
"log"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
)
func main() {
ctx := context.Background()
endpoint := "play.min.io"
accessKeyID := "Q3AM3UQ867SPQQA43P2F"
secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
useSSL := true
// Initialize minio client object.
minioClient, err := minio.New(endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
Secure: useSSL,
})
if err != nil {
log.Fatalln(err)
}
// Make a new bucket called mymusic.
bucketName := "mymusic"
location := "us-east-1"
err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location})
if err != nil {
// Check to see if the bucket already exists (which happens if you run this twice).
exists, errBucketExists := minioClient.BucketExists(ctx, bucketName)
if errBucketExists == nil && exists {
log.Printf("We already own %s\n", bucketName)
} else {
log.Fatalln(err)
}
} else {
log.Printf("Successfully created %s\n", bucketName)
}
// Upload the zip file.
objectName := "golden-oldies.zip"
filePath := "/tmp/golden-oldies.zip"
contentType := "application/zip"
// Upload the zip file with FPutObject.
info, err := minioClient.FPutObject(ctx, bucketName, objectName, filePath, minio.PutObjectOptions{ContentType: contentType})
if err != nil {
log.Fatalln(err)
}
log.Printf("Successfully uploaded %s of size %d\n", objectName, info.Size)
}
```
### Run FileUploader
```sh
go run file-uploader.go
2016/08/13 17:03:28 Successfully created mymusic
2016/08/13 17:03:40 Successfully uploaded golden-oldies.zip of size 16253413
mc ls play/mymusic/
[2016-05-27 16:02:16 PDT] 17MiB golden-oldies.zip
```
## API Reference
The full API Reference is available here.
* [Complete API Reference](https://docs.min.io/docs/golang-client-api-reference)
### API Reference : Bucket Operations
* [`MakeBucket`](https://docs.min.io/docs/golang-client-api-reference#MakeBucket)
* [`ListBuckets`](https://docs.min.io/docs/golang-client-api-reference#ListBuckets)
* [`BucketExists`](https://docs.min.io/docs/golang-client-api-reference#BucketExists)
* [`RemoveBucket`](https://docs.min.io/docs/golang-client-api-reference#RemoveBucket)
* [`ListObjects`](https://docs.min.io/docs/golang-client-api-reference#ListObjects)
* [`ListIncompleteUploads`](https://docs.min.io/docs/golang-client-api-reference#ListIncompleteUploads)
### API Reference : Bucket policy Operations
* [`SetBucketPolicy`](https://docs.min.io/docs/golang-client-api-reference#SetBucketPolicy)
* [`GetBucketPolicy`](https://docs.min.io/docs/golang-client-api-reference#GetBucketPolicy)
### API Reference : Bucket notification Operations
* [`SetBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#SetBucketNotification)
* [`GetBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#GetBucketNotification)
* [`RemoveAllBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#RemoveAllBucketNotification)
* [`ListenBucketNotification`](https://docs.min.io/docs/golang-client-api-reference#ListenBucketNotification) (MinIO Extension)
* [`ListenNotification`](https://docs.min.io/docs/golang-client-api-reference#ListenNotification) (MinIO Extension)
### API Reference : File Object Operations
* [`FPutObject`](https://docs.min.io/docs/golang-client-api-reference#FPutObject)
* [`FGetObject`](https://docs.min.io/docs/golang-client-api-reference#FGetObject)
### API Reference : Object Operations
* [`GetObject`](https://docs.min.io/docs/golang-client-api-reference#GetObject)
* [`PutObject`](https://docs.min.io/docs/golang-client-api-reference#PutObject)
* [`PutObjectStreaming`](https://docs.min.io/docs/golang-client-api-reference#PutObjectStreaming)
* [`StatObject`](https://docs.min.io/docs/golang-client-api-reference#StatObject)
* [`CopyObject`](https://docs.min.io/docs/golang-client-api-reference#CopyObject)
* [`RemoveObject`](https://docs.min.io/docs/golang-client-api-reference#RemoveObject)
* [`RemoveObjects`](https://docs.min.io/docs/golang-client-api-reference#RemoveObjects)
* [`RemoveIncompleteUpload`](https://docs.min.io/docs/golang-client-api-reference#RemoveIncompleteUpload)
* [`SelectObjectContent`](https://docs.min.io/docs/golang-client-api-reference#SelectObjectContent)
### API Reference : Presigned Operations
* [`PresignedGetObject`](https://docs.min.io/docs/golang-client-api-reference#PresignedGetObject)
* [`PresignedPutObject`](https://docs.min.io/docs/golang-client-api-reference#PresignedPutObject)
* [`PresignedHeadObject`](https://docs.min.io/docs/golang-client-api-reference#PresignedHeadObject)
* [`PresignedPostPolicy`](https://docs.min.io/docs/golang-client-api-reference#PresignedPostPolicy)
### API Reference : Client custom settings
* [`SetAppInfo`](http://docs.min.io/docs/golang-client-api-reference#SetAppInfo)
* [`TraceOn`](http://docs.min.io/docs/golang-client-api-reference#TraceOn)
* [`TraceOff`](http://docs.min.io/docs/golang-client-api-reference#TraceOff)
## Full Examples
### Full Examples : Bucket Operations
* [makebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/makebucket.go)
* [listbuckets.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbuckets.go)
* [bucketexists.go](https://github.com/minio/minio-go/blob/master/examples/s3/bucketexists.go)
* [removebucket.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucket.go)
* [listobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjects.go)
* [listobjectsV2.go](https://github.com/minio/minio-go/blob/master/examples/s3/listobjectsV2.go)
* [listincompleteuploads.go](https://github.com/minio/minio-go/blob/master/examples/s3/listincompleteuploads.go)
### Full Examples : Bucket policy Operations
* [setbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketpolicy.go)
* [getbucketpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketpolicy.go)
* [listbucketpolicies.go](https://github.com/minio/minio-go/blob/master/examples/s3/listbucketpolicies.go)
### Full Examples : Bucket lifecycle Operations
* [setbucketlifecycle.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketlifecycle.go)
* [getbucketlifecycle.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketlifecycle.go)
### Full Examples : Bucket encryption Operations
* [setbucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketencryption.go)
* [getbucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketencryption.go)
* [deletebucketencryption.go](https://github.com/minio/minio-go/blob/master/examples/s3/deletebucketencryption.go)
### Full Examples : Bucket replication Operations
* [setbucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketreplication.go)
* [getbucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketreplication.go)
* [removebucketreplication.go](https://github.com/minio/minio-go/blob/master/examples/s3/removebucketreplication.go)
### Full Examples : Bucket notification Operations
* [setbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/setbucketnotification.go)
* [getbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/getbucketnotification.go)
* [removeallbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeallbucketnotification.go)
* [listenbucketnotification.go](https://github.com/minio/minio-go/blob/master/examples/minio/listenbucketnotification.go) (MinIO Extension)
* [listennotification.go](https://github.com/minio/minio-go/blob/master/examples/minio/listen-notification.go) (MinIO Extension)
### Full Examples : File Object Operations
* [fputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputobject.go)
* [fgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/fgetobject.go)
* [fputobject-context.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputobject-context.go)
* [fgetobject-context.go](https://github.com/minio/minio-go/blob/master/examples/s3/fgetobject-context.go)
### Full Examples : Object Operations
* [putobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/putobject.go)
* [getobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/getobject.go)
* [putobject-context.go](https://github.com/minio/minio-go/blob/master/examples/s3/putobject-context.go)
* [getobject-context.go](https://github.com/minio/minio-go/blob/master/examples/s3/getobject-context.go)
* [statobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/statobject.go)
* [copyobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/copyobject.go)
* [removeobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobject.go)
* [removeincompleteupload.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeincompleteupload.go)
* [removeobjects.go](https://github.com/minio/minio-go/blob/master/examples/s3/removeobjects.go)
### Full Examples : Encrypted Object Operations
* [put-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/put-encrypted-object.go)
* [get-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/get-encrypted-object.go)
* [fput-encrypted-object.go](https://github.com/minio/minio-go/blob/master/examples/s3/fputencrypted-object.go)
### Full Examples : Presigned Operations
* [presignedgetobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedgetobject.go)
* [presignedputobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedputobject.go)
* [presignedheadobject.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedheadobject.go)
* [presignedpostpolicy.go](https://github.com/minio/minio-go/blob/master/examples/s3/presignedpostpolicy.go)
## Explore Further
* [Complete Documentation](https://docs.min.io)
* [MinIO Go Client SDK API Reference](https://docs.min.io/docs/golang-client-api-reference)
## Contribute
[Contributors Guide](https://github.com/minio/minio-go/blob/master/docs/zh_CN/CONTRIBUTING.md)

View File

@ -0,0 +1,134 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/sse"
)
// SetBucketEncryption sets the default encryption configuration on an existing bucket.
func (c *Client) SetBucketEncryption(ctx context.Context, bucketName string, config *sse.Configuration) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if config == nil {
return errInvalidArgument("configuration cannot be empty")
}
buf, err := xml.Marshal(config)
if err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("encryption", "")
// Content-length is mandatory to set a default encryption configuration
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(buf),
contentLength: int64(len(buf)),
contentMD5Base64: sumMD5Base64(buf),
}
// Execute PUT to upload a new bucket default encryption configuration.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
// RemoveBucketEncryption removes the default encryption configuration on a bucket with a context to control cancellations and timeouts.
func (c *Client) RemoveBucketEncryption(ctx context.Context, bucketName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("encryption", "")
// DELETE default encryption configuration on a bucket.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
// GetBucketEncryption gets the default encryption configuration
// on an existing bucket with a context to control cancellations and timeouts.
func (c *Client) GetBucketEncryption(ctx context.Context, bucketName string) (*sse.Configuration, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("encryption", "")
// Execute GET on bucket to get the default encryption configuration.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucketName, "")
}
encryptionConfig := &sse.Configuration{}
if err = xmlDecoder(resp.Body, encryptionConfig); err != nil {
return nil, err
}
return encryptionConfig, nil
}
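// Usage sketch (not part of the vendored file): enabling default SSE-S3
// encryption on a bucket via SetBucketEncryption. sse.NewConfigurationSSES3
// comes from github.com/minio/minio-go/v7/pkg/sse; the client and bucket
// name below are illustrative.
//
//	cfg := sse.NewConfigurationSSES3()
//	if err := minioClient.SetBucketEncryption(ctx, "mybucket", cfg); err != nil {
//		log.Fatalln(err)
//	}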

View File

@ -0,0 +1,147 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"io/ioutil"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/lifecycle"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// SetBucketLifecycle sets the lifecycle configuration on an existing bucket.
func (c *Client) SetBucketLifecycle(ctx context.Context, bucketName string, config *lifecycle.Configuration) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// If lifecycle is empty then delete it.
if config.Empty() {
return c.removeBucketLifecycle(ctx, bucketName)
}
buf, err := xml.Marshal(config)
if err != nil {
return err
}
// Save the updated lifecycle.
return c.putBucketLifecycle(ctx, bucketName, buf)
}
// Saves a new bucket lifecycle.
func (c *Client) putBucketLifecycle(ctx context.Context, bucketName string, buf []byte) error {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("lifecycle", "")
// Content-length is mandatory for put lifecycle request
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(buf),
contentLength: int64(len(buf)),
contentMD5Base64: sumMD5Base64(buf),
}
// Execute PUT to upload a new bucket lifecycle.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
return nil
}
// Remove lifecycle from a bucket.
func (c *Client) removeBucketLifecycle(ctx context.Context, bucketName string) error {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("lifecycle", "")
// Execute DELETE on bucket to remove the lifecycle configuration.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
return nil
}
// GetBucketLifecycle fetches the bucket lifecycle configuration.
func (c *Client) GetBucketLifecycle(ctx context.Context, bucketName string) (*lifecycle.Configuration, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
bucketLifecycle, err := c.getBucketLifecycle(ctx, bucketName)
if err != nil {
return nil, err
}
config := lifecycle.NewConfiguration()
if err = xml.Unmarshal(bucketLifecycle, config); err != nil {
return nil, err
}
return config, nil
}
// Request server for current bucket lifecycle.
func (c *Client) getBucketLifecycle(ctx context.Context, bucketName string) ([]byte, error) {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("lifecycle", "")
// Execute GET on bucket to get lifecycle.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucketName, "")
}
}
return ioutil.ReadAll(resp.Body)
}
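// Usage sketch (not part of the vendored file): expiring objects under a
// prefix after seven days via SetBucketLifecycle. The rule fields follow
// the github.com/minio/minio-go/v7/pkg/lifecycle types; names are
// illustrative.
//
//	config := lifecycle.NewConfiguration()
//	config.Rules = []lifecycle.Rule{{
//		ID:         "expire-tmp",
//		Status:     "Enabled",
//		RuleFilter: lifecycle.Filter{Prefix: "tmp/"},
//		Expiration: lifecycle.Expiration{Days: 7},
//	}}
//	if err := minioClient.SetBucketLifecycle(ctx, "mybucket", config); err != nil {
//		log.Fatalln(err)
//	}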

View File

@ -0,0 +1,254 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bufio"
"bytes"
"context"
"encoding/xml"
"net/http"
"net/url"
"time"
jsoniter "github.com/json-iterator/go"
"github.com/minio/minio-go/v7/pkg/notification"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// SetBucketNotification saves a new bucket notification with a context to control cancellations and timeouts.
func (c *Client) SetBucketNotification(ctx context.Context, bucketName string, config notification.Configuration) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("notification", "")
notifBytes, err := xml.Marshal(&config)
if err != nil {
return err
}
notifBuffer := bytes.NewReader(notifBytes)
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: notifBuffer,
contentLength: int64(len(notifBytes)),
contentMD5Base64: sumMD5Base64(notifBytes),
contentSHA256Hex: sum256Hex(notifBytes),
}
// Execute PUT to upload a new bucket notification.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
return nil
}
// RemoveAllBucketNotification removes the bucket notification, clearing all previously specified configuration.
func (c *Client) RemoveAllBucketNotification(ctx context.Context, bucketName string) error {
return c.SetBucketNotification(ctx, bucketName, notification.Configuration{})
}
// GetBucketNotification returns current bucket notification configuration
func (c *Client) GetBucketNotification(ctx context.Context, bucketName string) (bucketNotification notification.Configuration, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return notification.Configuration{}, err
}
return c.getBucketNotification(ctx, bucketName)
}
// Request server for notification rules.
func (c *Client) getBucketNotification(ctx context.Context, bucketName string) (notification.Configuration, error) {
urlValues := make(url.Values)
urlValues.Set("notification", "")
// Execute GET on bucket to get the notification configuration.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return notification.Configuration{}, err
}
return processBucketNotificationResponse(bucketName, resp)
}
// processes the GetNotification http response from the server.
func processBucketNotificationResponse(bucketName string, resp *http.Response) (notification.Configuration, error) {
if resp.StatusCode != http.StatusOK {
errResponse := httpRespToErrorResponse(resp, bucketName, "")
return notification.Configuration{}, errResponse
}
var bucketNotification notification.Configuration
err := xmlDecoder(resp.Body, &bucketNotification)
if err != nil {
return notification.Configuration{}, err
}
return bucketNotification, nil
}
// ListenNotification listens for all events; this is a MinIO-specific API.
func (c *Client) ListenNotification(ctx context.Context, prefix, suffix string, events []string) <-chan notification.Info {
return c.ListenBucketNotification(ctx, "", prefix, suffix, events)
}
// ListenBucketNotification listens for bucket events; this is a MinIO-specific API.
func (c *Client) ListenBucketNotification(ctx context.Context, bucketName, prefix, suffix string, events []string) <-chan notification.Info {
notificationInfoCh := make(chan notification.Info, 1)
const notificationCapacity = 4 * 1024 * 1024
notificationEventBuffer := make([]byte, notificationCapacity)
// Start a goroutine that reads the notification stream line by line.
go func(notificationInfoCh chan<- notification.Info) {
defer close(notificationInfoCh)
// Validate the bucket name.
if bucketName != "" {
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
select {
case notificationInfoCh <- notification.Info{
Err: err,
}:
case <-ctx.Done():
}
return
}
}
// Check ARN partition to verify if listening bucket is supported
if s3utils.IsAmazonEndpoint(*c.endpointURL) || s3utils.IsGoogleEndpoint(*c.endpointURL) {
select {
case notificationInfoCh <- notification.Info{
Err: errAPINotSupported("Listening for bucket notification is specific only to `minio` server endpoints"),
}:
case <-ctx.Done():
}
return
}
// Continuously run and listen on bucket notification.
// Create a done channel to control the retry goroutine.
retryDoneCh := make(chan struct{}, 1)
// Indicate to our routine to exit cleanly upon return.
defer close(retryDoneCh)
// Prepare urlValues to pass into the request on every loop
urlValues := make(url.Values)
urlValues.Set("prefix", prefix)
urlValues.Set("suffix", suffix)
urlValues["events"] = events
// Wait on the jitter retry loop.
for range c.newRetryTimerContinous(time.Second, time.Second*30, MaxJitter, retryDoneCh) {
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
if err != nil {
select {
case notificationInfoCh <- notification.Info{
Err: err,
}:
case <-ctx.Done():
}
return
}
// Validate http response, upon error return quickly.
if resp.StatusCode != http.StatusOK {
errResponse := httpRespToErrorResponse(resp, bucketName, "")
select {
case notificationInfoCh <- notification.Info{
Err: errResponse,
}:
case <-ctx.Done():
}
return
}
// Initialize a new bufio scanner, to read line by line.
bio := bufio.NewScanner(resp.Body)
// Use a higher buffer to support unexpected
// caching done by proxies
bio.Buffer(notificationEventBuffer, notificationCapacity)
json := jsoniter.ConfigCompatibleWithStandardLibrary
// Unmarshal each line, returns marshaled values.
for bio.Scan() {
var notificationInfo notification.Info
if err = json.Unmarshal(bio.Bytes(), &notificationInfo); err != nil {
// Unexpected error during json unmarshal, send
// the error to the caller so it can act on it as needed.
select {
case notificationInfoCh <- notification.Info{
Err: err,
}:
case <-ctx.Done():
return
}
closeResponse(resp)
continue
}
// Send notificationInfo
select {
case notificationInfoCh <- notificationInfo:
case <-ctx.Done():
closeResponse(resp)
return
}
}
if err = bio.Err(); err != nil {
select {
case notificationInfoCh <- notification.Info{
Err: err,
}:
case <-ctx.Done():
return
}
}
// Close current connection before looping further.
closeResponse(resp)
}
}(notificationInfoCh)
// Returns the notification info channel, for caller to start reading from.
return notificationInfoCh
}
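// Usage sketch (not part of the vendored file): consuming the channel
// returned by ListenBucketNotification. Event names follow the S3
// convention; the bucket, prefix, and suffix values are illustrative.
//
//	events := []string{"s3:ObjectCreated:*", "s3:ObjectRemoved:*"}
//	for info := range minioClient.ListenBucketNotification(ctx, "mybucket", "photos/", ".jpg", events) {
//		if info.Err != nil {
//			log.Fatalln(info.Err)
//		}
//		for _, record := range info.Records {
//			log.Println("event:", record.EventName, "key:", record.S3.Object.Key)
//		}
//	}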

View File

@ -0,0 +1,147 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// SetBucketPolicy sets the access permissions on an existing bucket.
func (c *Client) SetBucketPolicy(ctx context.Context, bucketName, policy string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// If policy is empty then delete the bucket policy.
if policy == "" {
return c.removeBucketPolicy(ctx, bucketName)
}
// Save the updated policies.
return c.putBucketPolicy(ctx, bucketName, policy)
}
// Saves a new bucket policy.
func (c *Client) putBucketPolicy(ctx context.Context, bucketName, policy string) error {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("policy", "")
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: strings.NewReader(policy),
contentLength: int64(len(policy)),
}
// Execute PUT to upload a new bucket policy.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent && resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
return nil
}
// Removes all policies on a bucket.
func (c *Client) removeBucketPolicy(ctx context.Context, bucketName string) error {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("policy", "")
// Execute DELETE on bucket to remove the policy.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
// GetBucketPolicy returns the current policy
func (c *Client) GetBucketPolicy(ctx context.Context, bucketName string) (string, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", err
}
bucketPolicy, err := c.getBucketPolicy(ctx, bucketName)
if err != nil {
errResponse := ToErrorResponse(err)
if errResponse.Code == "NoSuchBucketPolicy" {
return "", nil
}
return "", err
}
return bucketPolicy, nil
}
// Request server for current bucket policy.
func (c *Client) getBucketPolicy(ctx context.Context, bucketName string) (string, error) {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("policy", "")
// Execute GET on bucket to get the policy.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return "", err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return "", httpRespToErrorResponse(resp, bucketName, "")
}
}
bucketPolicyBuf, err := ioutil.ReadAll(resp.Body)
if err != nil {
return "", err
}
policy := string(bucketPolicyBuf)
return policy, err
}
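// Usage sketch (not part of the vendored file): granting anonymous read
// access with SetBucketPolicy. The JSON document is a standard S3 bucket
// policy; the bucket name is illustrative.
//
//	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["*"]},"Action":["s3:GetObject"],"Resource":["arn:aws:s3:::mybucket/*"]}]}`
//	if err := minioClient.SetBucketPolicy(ctx, "mybucket", policy); err != nil {
//		log.Fatalln(err)
//	}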

View File

@ -0,0 +1,291 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/json"
"encoding/xml"
"io/ioutil"
"net/http"
"net/url"
"time"
"github.com/google/uuid"
"github.com/minio/minio-go/v7/pkg/replication"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// RemoveBucketReplication removes a replication config on an existing bucket.
func (c *Client) RemoveBucketReplication(ctx context.Context, bucketName string) error {
return c.removeBucketReplication(ctx, bucketName)
}
// SetBucketReplication sets a replication config on an existing bucket.
func (c *Client) SetBucketReplication(ctx context.Context, bucketName string, cfg replication.Config) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// If replication is empty then delete it.
if cfg.Empty() {
return c.removeBucketReplication(ctx, bucketName)
}
// Save the updated replication.
return c.putBucketReplication(ctx, bucketName, cfg)
}
// Saves a new bucket replication.
func (c *Client) putBucketReplication(ctx context.Context, bucketName string, cfg replication.Config) error {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("replication", "")
replication, err := xml.Marshal(cfg)
if err != nil {
return err
}
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(replication),
contentLength: int64(len(replication)),
contentMD5Base64: sumMD5Base64(replication),
}
// Execute PUT to upload a new bucket replication config.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
// Remove replication from a bucket.
func (c *Client) removeBucketReplication(ctx context.Context, bucketName string) error {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("replication", "")
// Execute DELETE on bucket to remove the replication configuration.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
// GetBucketReplication fetches the bucket replication configuration. If the config is not
// found, returns an empty config with a nil error.
func (c *Client) GetBucketReplication(ctx context.Context, bucketName string) (cfg replication.Config, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return cfg, err
}
bucketReplicationCfg, err := c.getBucketReplication(ctx, bucketName)
if err != nil {
errResponse := ToErrorResponse(err)
if errResponse.Code == "ReplicationConfigurationNotFoundError" {
return cfg, nil
}
return cfg, err
}
return bucketReplicationCfg, nil
}
// Request server for current bucket replication config.
func (c *Client) getBucketReplication(ctx context.Context, bucketName string) (cfg replication.Config, err error) {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("replication", "")
// Execute GET on bucket to get replication config.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return cfg, err
}
if resp.StatusCode != http.StatusOK {
return cfg, httpRespToErrorResponse(resp, bucketName, "")
}
if err = xmlDecoder(resp.Body, &cfg); err != nil {
return cfg, err
}
return cfg, nil
}
// GetBucketReplicationMetrics fetches bucket replication status metrics
func (c *Client) GetBucketReplicationMetrics(ctx context.Context, bucketName string) (s replication.Metrics, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return s, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("replication-metrics", "")
// Execute GET on bucket to get replication metrics.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return s, err
}
if resp.StatusCode != http.StatusOK {
return s, httpRespToErrorResponse(resp, bucketName, "")
}
respBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
return s, err
}
if err := json.Unmarshal(respBytes, &s); err != nil {
return s, err
}
return s, nil
}
// mustGetUUID - get a random UUID.
func mustGetUUID() string {
u, err := uuid.NewRandom()
if err != nil {
return ""
}
return u.String()
}
// ResetBucketReplication kicks off replication of previously replicated objects if ExistingObjectReplication
// is enabled in the replication config
func (c *Client) ResetBucketReplication(ctx context.Context, bucketName string, olderThan time.Duration) (rID string, err error) {
rID = mustGetUUID()
_, err = c.resetBucketReplicationOnTarget(ctx, bucketName, olderThan, "", rID)
if err != nil {
return rID, err
}
return rID, nil
}
// ResetBucketReplicationOnTarget kicks off replication of previously replicated objects if
// ExistingObjectReplication is enabled in the replication config
func (c *Client) ResetBucketReplicationOnTarget(ctx context.Context, bucketName string, olderThan time.Duration, tgtArn string) (replication.ResyncTargetsInfo, error) {
return c.resetBucketReplicationOnTarget(ctx, bucketName, olderThan, tgtArn, mustGetUUID())
}
// resetBucketReplicationOnTarget kicks off replication of previously replicated objects if
// ExistingObjectReplication is enabled in the replication config.
func (c *Client) resetBucketReplicationOnTarget(ctx context.Context, bucketName string, olderThan time.Duration, tgtArn string, resetID string) (rinfo replication.ResyncTargetsInfo, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("replication-reset", "")
if olderThan > 0 {
urlValues.Set("older-than", olderThan.String())
}
if tgtArn != "" {
urlValues.Set("arn", tgtArn)
}
urlValues.Set("reset-id", resetID)
// Execute PUT on bucket to reset the replication config.
resp, err := c.executeMethod(ctx, http.MethodPut, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return rinfo, err
}
if resp.StatusCode != http.StatusOK {
return rinfo, httpRespToErrorResponse(resp, bucketName, "")
}
if err = json.NewDecoder(resp.Body).Decode(&rinfo); err != nil {
return rinfo, err
}
return rinfo, nil
}
// GetBucketReplicationResyncStatus gets the status of replication resync
func (c *Client) GetBucketReplicationResyncStatus(ctx context.Context, bucketName, arn string) (rinfo replication.ResyncTargetsInfo, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return rinfo, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("replication-reset-status", "")
if arn != "" {
urlValues.Set("arn", arn)
}
// Execute GET on bucket to get the replication resync status.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return rinfo, err
}
if resp.StatusCode != http.StatusOK {
return rinfo, httpRespToErrorResponse(resp, bucketName, "")
}
if err = json.NewDecoder(resp.Body).Decode(&rinfo); err != nil {
return rinfo, err
}
return rinfo, nil
}

View File

@ -0,0 +1,135 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"errors"
"io"
"io/ioutil"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/tags"
)
// GetBucketTagging fetches the tagging configuration for a bucket with a
// context to control cancellations and timeouts.
func (c *Client) GetBucketTagging(ctx context.Context, bucketName string) (*tags.Tags, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("tagging", "")
// Execute GET on bucket to get tagging configuration.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucketName, "")
}
defer io.Copy(ioutil.Discard, resp.Body)
return tags.ParseBucketXML(resp.Body)
}
// SetBucketTagging sets tagging configuration for a bucket
// with a context to control cancellations and timeouts.
func (c *Client) SetBucketTagging(ctx context.Context, bucketName string, tags *tags.Tags) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if tags == nil {
return errors.New("nil tags passed")
}
buf, err := xml.Marshal(tags)
if err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("tagging", "")
// Content-length is mandatory to set the tagging configuration.
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(buf),
contentLength: int64(len(buf)),
contentMD5Base64: sumMD5Base64(buf),
}
// Execute PUT on bucket to put tagging configuration.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
// RemoveBucketTagging removes tagging configuration for a
// bucket with a context to control cancellations and timeouts.
func (c *Client) RemoveBucketTagging(ctx context.Context, bucketName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("tagging", "")
// Execute DELETE on bucket to remove tagging configuration.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
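// Usage sketch (not part of the vendored file): assigning bucket tags with
// SetBucketTagging. tags.NewTags comes from
// github.com/minio/minio-go/v7/pkg/tags; key/value pairs are illustrative.
//
//	t, err := tags.NewTags(map[string]string{"project": "alpha"}, false)
//	if err != nil {
//		log.Fatalln(err)
//	}
//	if err := minioClient.SetBucketTagging(ctx, "mybucket", t); err != nil {
//		log.Fatalln(err)
//	}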

View File

@ -0,0 +1,146 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// SetBucketVersioning sets a bucket versioning configuration
func (c *Client) SetBucketVersioning(ctx context.Context, bucketName string, config BucketVersioningConfiguration) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
buf, err := xml.Marshal(config)
if err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("versioning", "")
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(buf),
contentLength: int64(len(buf)),
contentMD5Base64: sumMD5Base64(buf),
contentSHA256Hex: sum256Hex(buf),
}
// Execute PUT to set the bucket versioning configuration.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
return nil
}
// EnableVersioning - enable object versioning in given bucket.
func (c *Client) EnableVersioning(ctx context.Context, bucketName string) error {
return c.SetBucketVersioning(ctx, bucketName, BucketVersioningConfiguration{Status: "Enabled"})
}
// SuspendVersioning - suspend object versioning in given bucket.
func (c *Client) SuspendVersioning(ctx context.Context, bucketName string) error {
return c.SetBucketVersioning(ctx, bucketName, BucketVersioningConfiguration{Status: "Suspended"})
}
// ExcludedPrefix - holds individual prefixes excluded from being versioned.
type ExcludedPrefix struct {
Prefix string
}
// BucketVersioningConfiguration is the versioning configuration structure
type BucketVersioningConfiguration struct {
XMLName xml.Name `xml:"VersioningConfiguration"`
Status string `xml:"Status"`
MFADelete string `xml:"MfaDelete,omitempty"`
// MinIO extension - allows selective, prefix-level versioning exclusion.
// Requires versioning to be enabled
ExcludedPrefixes []ExcludedPrefix `xml:",omitempty"`
ExcludeFolders bool `xml:",omitempty"`
}
// Various supported states
const (
Enabled = "Enabled"
// Disabled State = "Disabled" is only used by MFA Delete, which is not supported yet.
Suspended = "Suspended"
)
// Enabled returns true if bucket versioning is enabled
func (b BucketVersioningConfiguration) Enabled() bool {
return b.Status == Enabled
}
// Suspended returns true if bucket versioning is suspended
func (b BucketVersioningConfiguration) Suspended() bool {
return b.Status == Suspended
}
// GetBucketVersioning gets the versioning configuration on
// an existing bucket with a context to control cancellations and timeouts.
func (c *Client) GetBucketVersioning(ctx context.Context, bucketName string) (BucketVersioningConfiguration, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return BucketVersioningConfiguration{}, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("versioning", "")
// Execute GET on bucket to get the versioning configuration.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return BucketVersioningConfiguration{}, err
}
if resp.StatusCode != http.StatusOK {
return BucketVersioningConfiguration{}, httpRespToErrorResponse(resp, bucketName, "")
}
versioningConfig := BucketVersioningConfiguration{}
if err = xmlDecoder(resp.Body, &versioningConfig); err != nil {
return versioningConfig, err
}
return versioningConfig, nil
}
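// Usage sketch (not part of the vendored file): turning versioning on and
// reading it back. EnableVersioning wraps SetBucketVersioning as defined
// above; the bucket name is illustrative.
//
//	if err := minioClient.EnableVersioning(ctx, "mybucket"); err != nil {
//		log.Fatalln(err)
//	}
//	vcfg, err := minioClient.GetBucketVersioning(ctx, "mybucket")
//	if err != nil {
//		log.Fatalln(err)
//	}
//	log.Println("versioning enabled:", vcfg.Enabled())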

View File

@ -0,0 +1,592 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017, 2018 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/google/uuid"
"github.com/minio/minio-go/v7/pkg/encrypt"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// CopyDestOptions represents options specified by user for CopyObject/ComposeObject APIs
type CopyDestOptions struct {
Bucket string // points to destination bucket
Object string // points to destination object
// `Encryption` is the key info for server-side-encryption with customer
// provided key. If it is nil, no encryption is performed.
Encryption encrypt.ServerSide
// `userMeta` is the user-metadata key-value pairs to be set on the
// destination. The keys are automatically prefixed with `x-amz-meta-`
// if needed. If nil is passed, and if only a single source (of any
// size) is provided in the ComposeObject call, then metadata from the
// source is copied to the destination.
// (If no user-metadata is provided, it is copied from the source,
// but only when there is exactly one source object in the compose
// request.)
UserMetadata map[string]string
// UserMetadata is set on the destination only if ReplaceMetadata is true;
// otherwise UserMetadata is ignored and src.UserMetadata is preserved.
// NOTE: if you set this value to true and no metadata is present in
// UserMetadata, your destination object will not have any metadata
// set.
ReplaceMetadata bool
// `userTags` is the user defined object tags to be set on destination.
// This will be set only if the `replaceTags` field is set to true.
// Otherwise this field is ignored
UserTags map[string]string
ReplaceTags bool
// Specifies whether you want to apply a Legal Hold to the copied object.
LegalHold LegalHoldStatus
// Object Retention related fields
Mode RetentionMode
RetainUntilDate time.Time
Size int64 // Needs to be specified if progress bar is specified.
// Progress of the entire copy operation will be sent here.
Progress io.Reader
}
// filterCustomMeta processes custom metadata to remove an `x-amz-meta-`
// prefix if present, and drops duplicate keys that collide after this
// prefix removal.
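// For example, a map containing both "X-Amz-Meta-Color" and "Color" collides
// on the key "Color" after prefix removal; only one of the two values is
// kept, and which one is unspecified since map iteration order is random.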
func filterCustomMeta(userMeta map[string]string) map[string]string {
m := make(map[string]string)
for k, v := range userMeta {
if strings.HasPrefix(strings.ToLower(k), "x-amz-meta-") {
k = k[len("x-amz-meta-"):]
}
if _, ok := m[k]; ok {
continue
}
m[k] = v
}
return m
}
// Marshal converts all the CopyDestOptions into their
// equivalent HTTP header representation
func (opts CopyDestOptions) Marshal(header http.Header) {
const replaceDirective = "REPLACE"
if opts.ReplaceTags {
header.Set(amzTaggingHeaderDirective, replaceDirective)
if tags := s3utils.TagEncode(opts.UserTags); tags != "" {
header.Set(amzTaggingHeader, tags)
}
}
if opts.LegalHold != LegalHoldStatus("") {
header.Set(amzLegalHoldHeader, opts.LegalHold.String())
}
if opts.Mode != RetentionMode("") && !opts.RetainUntilDate.IsZero() {
header.Set(amzLockMode, opts.Mode.String())
header.Set(amzLockRetainUntil, opts.RetainUntilDate.Format(time.RFC3339))
}
if opts.Encryption != nil {
opts.Encryption.Marshal(header)
}
if opts.ReplaceMetadata {
header.Set("x-amz-metadata-directive", replaceDirective)
for k, v := range filterCustomMeta(opts.UserMetadata) {
if isAmzHeader(k) || isStandardHeader(k) || isStorageClassHeader(k) {
header.Set(k, v)
} else {
header.Set("x-amz-meta-"+k, v)
}
}
}
}
// validate checks that the CopyDestOptions are well-formed.
func (opts CopyDestOptions) validate() (err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(opts.Bucket); err != nil {
return err
}
if err = s3utils.CheckValidObjectName(opts.Object); err != nil {
return err
}
if opts.Progress != nil && opts.Size < 0 {
return errInvalidArgument("For progress bar effective size needs to be specified")
}
return nil
}
// CopySrcOptions represents a source object to be copied, using
// server-side copying APIs.
type CopySrcOptions struct {
Bucket, Object string
VersionID string
MatchETag string
NoMatchETag string
MatchModifiedSince time.Time
MatchUnmodifiedSince time.Time
MatchRange bool
Start, End int64
Encryption encrypt.ServerSide
}
// Marshal converts all the CopySrcOptions into their
// equivalent HTTP header representation
func (opts CopySrcOptions) Marshal(header http.Header) {
// Set the source header
header.Set("x-amz-copy-source", s3utils.EncodePath(opts.Bucket+"/"+opts.Object))
if opts.VersionID != "" {
header.Set("x-amz-copy-source", s3utils.EncodePath(opts.Bucket+"/"+opts.Object)+"?versionId="+opts.VersionID)
}
if opts.MatchETag != "" {
header.Set("x-amz-copy-source-if-match", opts.MatchETag)
}
if opts.NoMatchETag != "" {
header.Set("x-amz-copy-source-if-none-match", opts.NoMatchETag)
}
if !opts.MatchModifiedSince.IsZero() {
header.Set("x-amz-copy-source-if-modified-since", opts.MatchModifiedSince.Format(http.TimeFormat))
}
if !opts.MatchUnmodifiedSince.IsZero() {
header.Set("x-amz-copy-source-if-unmodified-since", opts.MatchUnmodifiedSince.Format(http.TimeFormat))
}
if opts.Encryption != nil {
encrypt.SSECopy(opts.Encryption).Marshal(header)
}
}
func (opts CopySrcOptions) validate() (err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(opts.Bucket); err != nil {
return err
}
if err = s3utils.CheckValidObjectName(opts.Object); err != nil {
return err
}
if opts.Start > opts.End || opts.Start < 0 {
return errInvalidArgument("start must be non-negative, and start must be at most end.")
}
return nil
}
// Low level implementation of the CopyObject API; supports only up to 5GiB worth of copy.
func (c *Client) copyObjectDo(ctx context.Context, srcBucket, srcObject, destBucket, destObject string,
metadata map[string]string, srcOpts CopySrcOptions, dstOpts PutObjectOptions,
) (ObjectInfo, error) {
// Build headers.
headers := make(http.Header)
// Set all the metadata headers.
for k, v := range metadata {
headers.Set(k, v)
}
if !dstOpts.Internal.ReplicationStatus.Empty() {
headers.Set(amzBucketReplicationStatus, string(dstOpts.Internal.ReplicationStatus))
}
if !dstOpts.Internal.SourceMTime.IsZero() {
headers.Set(minIOBucketSourceMTime, dstOpts.Internal.SourceMTime.Format(time.RFC3339Nano))
}
if dstOpts.Internal.SourceETag != "" {
headers.Set(minIOBucketSourceETag, dstOpts.Internal.SourceETag)
}
if dstOpts.Internal.ReplicationRequest {
headers.Set(minIOBucketReplicationRequest, "")
}
if !dstOpts.Internal.LegalholdTimestamp.IsZero() {
headers.Set(minIOBucketReplicationObjectLegalHoldTimestamp, dstOpts.Internal.LegalholdTimestamp.Format(time.RFC3339Nano))
}
if !dstOpts.Internal.RetentionTimestamp.IsZero() {
headers.Set(minIOBucketReplicationObjectRetentionTimestamp, dstOpts.Internal.RetentionTimestamp.Format(time.RFC3339Nano))
}
if !dstOpts.Internal.TaggingTimestamp.IsZero() {
headers.Set(minIOBucketReplicationTaggingTimestamp, dstOpts.Internal.TaggingTimestamp.Format(time.RFC3339Nano))
}
if len(dstOpts.UserTags) != 0 {
headers.Set(amzTaggingHeader, s3utils.TagEncode(dstOpts.UserTags))
}
reqMetadata := requestMetadata{
bucketName: destBucket,
objectName: destObject,
customHeader: headers,
}
if dstOpts.Internal.SourceVersionID != "" {
if dstOpts.Internal.SourceVersionID != nullVersionID {
if _, err := uuid.Parse(dstOpts.Internal.SourceVersionID); err != nil {
return ObjectInfo{}, errInvalidArgument(err.Error())
}
}
urlValues := make(url.Values)
urlValues.Set("versionId", dstOpts.Internal.SourceVersionID)
reqMetadata.queryValues = urlValues
}
// Set the source header
headers.Set("x-amz-copy-source", s3utils.EncodePath(srcBucket+"/"+srcObject))
if srcOpts.VersionID != "" {
headers.Set("x-amz-copy-source", s3utils.EncodePath(srcBucket+"/"+srcObject)+"?versionId="+srcOpts.VersionID)
}
// Send copy-object request
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return ObjectInfo{}, err
}
// Check if we got an error response.
if resp.StatusCode != http.StatusOK {
return ObjectInfo{}, httpRespToErrorResponse(resp, srcBucket, srcObject)
}
cpObjRes := copyObjectResult{}
err = xmlDecoder(resp.Body, &cpObjRes)
if err != nil {
return ObjectInfo{}, err
}
objInfo := ObjectInfo{
Key: destObject,
ETag: strings.Trim(cpObjRes.ETag, "\""),
LastModified: cpObjRes.LastModified,
}
return objInfo, nil
}
func (c *Client) copyObjectPartDo(ctx context.Context, srcBucket, srcObject, destBucket, destObject string, uploadID string,
partID int, startOffset int64, length int64, metadata map[string]string,
) (p CompletePart, err error) {
headers := make(http.Header)
// Set source
headers.Set("x-amz-copy-source", s3utils.EncodePath(srcBucket+"/"+srcObject))
if startOffset < 0 {
return p, errInvalidArgument("startOffset must be non-negative")
}
if length >= 0 {
headers.Set("x-amz-copy-source-range", fmt.Sprintf("bytes=%d-%d", startOffset, startOffset+length-1))
}
for k, v := range metadata {
headers.Set(k, v)
}
queryValues := make(url.Values)
queryValues.Set("partNumber", strconv.Itoa(partID))
queryValues.Set("uploadId", uploadID)
resp, err := c.executeMethod(ctx, http.MethodPut, requestMetadata{
bucketName: destBucket,
objectName: destObject,
customHeader: headers,
queryValues: queryValues,
})
defer closeResponse(resp)
if err != nil {
return
}
// Check if we got an error response.
if resp.StatusCode != http.StatusOK {
return p, httpRespToErrorResponse(resp, destBucket, destObject)
}
// Decode copy-part response on success.
cpObjRes := copyObjectResult{}
err = xmlDecoder(resp.Body, &cpObjRes)
if err != nil {
return p, err
}
p.PartNumber, p.ETag = partID, cpObjRes.ETag
return p, nil
}
// uploadPartCopy - helper function to create a part in a multipart
// upload via an upload-part-copy request
// https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPartCopy.html
func (c *Client) uploadPartCopy(ctx context.Context, bucket, object, uploadID string, partNumber int,
headers http.Header,
) (p CompletePart, err error) {
// Build query parameters
urlValues := make(url.Values)
urlValues.Set("partNumber", strconv.Itoa(partNumber))
urlValues.Set("uploadId", uploadID)
// Send upload-part-copy request
resp, err := c.executeMethod(ctx, http.MethodPut, requestMetadata{
bucketName: bucket,
objectName: object,
customHeader: headers,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return p, err
}
// Check if we got an error response.
if resp.StatusCode != http.StatusOK {
return p, httpRespToErrorResponse(resp, bucket, object)
}
// Decode copy-part response on success.
cpObjRes := copyObjectResult{}
err = xmlDecoder(resp.Body, &cpObjRes)
if err != nil {
return p, err
}
p.PartNumber, p.ETag = partNumber, cpObjRes.ETag
return p, nil
}
// ComposeObject - creates an object using server-side copying
// of existing objects. It takes a list of source objects (with optional offsets)
// and concatenates them into a new object using only server-side copying
// operations. Optionally takes progress reader hook for applications to
// look at current progress.
func (c *Client) ComposeObject(ctx context.Context, dst CopyDestOptions, srcs ...CopySrcOptions) (UploadInfo, error) {
if len(srcs) < 1 || len(srcs) > maxPartsCount {
return UploadInfo{}, errInvalidArgument("There must be as least one and up to 10000 source objects.")
}
for _, src := range srcs {
if err := src.validate(); err != nil {
return UploadInfo{}, err
}
}
if err := dst.validate(); err != nil {
return UploadInfo{}, err
}
srcObjectInfos := make([]ObjectInfo, len(srcs))
srcObjectSizes := make([]int64, len(srcs))
var totalSize, totalParts int64
var err error
for i, src := range srcs {
opts := StatObjectOptions{ServerSideEncryption: encrypt.SSE(src.Encryption), VersionID: src.VersionID}
srcObjectInfos[i], err = c.StatObject(context.Background(), src.Bucket, src.Object, opts)
if err != nil {
return UploadInfo{}, err
}
srcCopySize := srcObjectInfos[i].Size
// Check if a segment is specified, and if so, is the
// segment within object bounds?
if src.MatchRange {
// Since range is specified,
// 0 <= src.start <= src.end
// so only invalid case to check is:
if src.End >= srcCopySize || src.Start < 0 {
return UploadInfo{}, errInvalidArgument(
fmt.Sprintf("CopySrcOptions %d has invalid segment-to-copy [%d, %d] (size is %d)",
i, src.Start, src.End, srcCopySize))
}
srcCopySize = src.End - src.Start + 1
}
// Only the last source may be less than `absMinPartSize`
if srcCopySize < absMinPartSize && i < len(srcs)-1 {
return UploadInfo{}, errInvalidArgument(
fmt.Sprintf("CopySrcOptions %d is too small (%d) and it is not the last part", i, srcCopySize))
}
// Is data to copy too large?
totalSize += srcCopySize
if totalSize > maxMultipartPutObjectSize {
return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Cannot compose an object of size %d (> 5TiB)", totalSize))
}
// record source size
srcObjectSizes[i] = srcCopySize
// calculate parts needed for current source
totalParts += partsRequired(srcCopySize)
// Do we need more parts than we are allowed?
if totalParts > maxPartsCount {
return UploadInfo{}, errInvalidArgument(fmt.Sprintf(
"Your proposed compose object requires more than %d parts", maxPartsCount))
}
}
// Single source object case (i.e. when only one source is
// involved, it is copied wholly and is at most 5GiB in
// size; empty files are also supported).
if (totalParts == 1 && srcs[0].Start == -1 && totalSize <= maxPartSize) || (totalSize == 0) {
return c.CopyObject(ctx, dst, srcs[0])
}
// Now, handle multipart-copy cases.
// 1. Ensure that the object has not been changed while
// we are copying data.
for i := range srcs {
srcs[i].MatchETag = srcObjectInfos[i].ETag
}
// 2. Initiate a new multipart upload.
// Set user-metadata on the destination object. If no
// user-metadata is specified, and there is only one source,
// (only) then metadata from source is copied.
var userMeta map[string]string
if dst.ReplaceMetadata {
userMeta = dst.UserMetadata
} else {
userMeta = srcObjectInfos[0].UserMetadata
}
var userTags map[string]string
if dst.ReplaceTags {
userTags = dst.UserTags
} else {
userTags = srcObjectInfos[0].UserTags
}
uploadID, err := c.newUploadID(ctx, dst.Bucket, dst.Object, PutObjectOptions{
ServerSideEncryption: dst.Encryption,
UserMetadata: userMeta,
UserTags: userTags,
Mode: dst.Mode,
RetainUntilDate: dst.RetainUntilDate,
LegalHold: dst.LegalHold,
})
if err != nil {
return UploadInfo{}, err
}
// 3. Perform copy part uploads
objParts := []CompletePart{}
partIndex := 1
for i, src := range srcs {
h := make(http.Header)
src.Marshal(h)
if dst.Encryption != nil && dst.Encryption.Type() == encrypt.SSEC {
dst.Encryption.Marshal(h)
}
// calculate start/end indices of parts after
// splitting.
startIdx, endIdx := calculateEvenSplits(srcObjectSizes[i], src)
for j, start := range startIdx {
end := endIdx[j]
// Add (or reset) source range header for
// upload part copy request.
h.Set("x-amz-copy-source-range",
fmt.Sprintf("bytes=%d-%d", start, end))
// make upload-part-copy request
complPart, err := c.uploadPartCopy(ctx, dst.Bucket,
dst.Object, uploadID, partIndex, h)
if err != nil {
return UploadInfo{}, err
}
if dst.Progress != nil {
io.CopyN(ioutil.Discard, dst.Progress, end-start+1)
}
objParts = append(objParts, complPart)
partIndex++
}
}
// 4. Make final complete-multipart request.
uploadInfo, err := c.completeMultipartUpload(ctx, dst.Bucket, dst.Object, uploadID,
completeMultipartUpload{Parts: objParts}, PutObjectOptions{})
if err != nil {
return UploadInfo{}, err
}
uploadInfo.Size = totalSize
return uploadInfo, nil
}
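// Usage sketch: concatenating two objects server-side. Assumes a configured
// *minio.Client named client; bucket and object names are placeholders.
// Every source except the last must be at least 5MiB (absMinPartSize).
//
//	srcs := []minio.CopySrcOptions{
//		{Bucket: "example-bucket", Object: "part-1"},
//		{Bucket: "example-bucket", Object: "part-2"},
//	}
//	dst := minio.CopyDestOptions{Bucket: "example-bucket", Object: "joined"}
//	info, err := client.ComposeObject(context.Background(), dst, srcs...)
//	if err != nil {
//		log.Fatalln(err)
//	}
//	log.Println("composed", info.Key, "with", info.Size, "bytes")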
// partsRequired computes the number of parts needed to copy `size` bytes,
// using a maximum part size of maxMultipartPutObjectSize / (maxPartsCount - 1).
func partsRequired(size int64) int64 {
maxPartSize := maxMultipartPutObjectSize / (maxPartsCount - 1)
r := size / int64(maxPartSize)
if size%int64(maxPartSize) > 0 {
r++
}
return r
}
// calculateEvenSplits - computes splits for a source and returns
// start and end index slices. Splits happen evenly to be sure that no
// part is less than 5MiB, as that could fail the multipart request if
// it is not the last part.
func calculateEvenSplits(size int64, src CopySrcOptions) (startIndex, endIndex []int64) {
if size == 0 {
return
}
reqParts := partsRequired(size)
startIndex = make([]int64, reqParts)
endIndex = make([]int64, reqParts)
// Compute number of required parts `k`, as:
//
// k = ceiling(size / copyPartSize)
//
// Now, distribute the `size` bytes in the source into
// k parts as evenly as possible:
//
// r parts sized (q+1) bytes, and
// (k - r) parts sized q bytes, where
//
// size = q * k + r (by simple division of size by k,
// so that 0 <= r < k)
//
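// A worked example: size = 10 with reqParts = 3 gives q = 3 and r = 1,
// i.e. one part of 4 bytes followed by two parts of 3 bytes
// (ranges [0,3], [4,6] and [7,9] when the copy starts at offset 0).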
start := src.Start
if start == -1 {
start = 0
}
quot, rem := size/reqParts, size%reqParts
nextStart := start
for j := int64(0); j < reqParts; j++ {
curPartSize := quot
if j < rem {
curPartSize++
}
cStart := nextStart
cEnd := cStart + curPartSize - 1
nextStart = cEnd + 1
startIndex[j], endIndex[j] = cStart, cEnd
}
return
}

vendor/github.com/minio/minio-go/v7/api-copy-object.go generated vendored Normal file
@@ -0,0 +1,77 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017, 2018 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
"io/ioutil"
"net/http"
)
// CopyObject - copy a source object into a new object
func (c *Client) CopyObject(ctx context.Context, dst CopyDestOptions, src CopySrcOptions) (UploadInfo, error) {
if err := src.validate(); err != nil {
return UploadInfo{}, err
}
if err := dst.validate(); err != nil {
return UploadInfo{}, err
}
header := make(http.Header)
dst.Marshal(header)
src.Marshal(header)
resp, err := c.executeMethod(ctx, http.MethodPut, requestMetadata{
bucketName: dst.Bucket,
objectName: dst.Object,
customHeader: header,
})
if err != nil {
return UploadInfo{}, err
}
defer closeResponse(resp)
if resp.StatusCode != http.StatusOK {
return UploadInfo{}, httpRespToErrorResponse(resp, dst.Bucket, dst.Object)
}
// Update the progress properly after successful copy.
if dst.Progress != nil {
io.Copy(ioutil.Discard, io.LimitReader(dst.Progress, dst.Size))
}
cpObjRes := copyObjectResult{}
if err = xmlDecoder(resp.Body, &cpObjRes); err != nil {
return UploadInfo{}, err
}
// extract lifecycle expiry date and rule ID
expTime, ruleID := amzExpirationToExpiryDateRuleID(resp.Header.Get(amzExpiration))
return UploadInfo{
Bucket: dst.Bucket,
Key: dst.Object,
LastModified: cpObjRes.LastModified,
ETag: trimEtag(resp.Header.Get("ETag")),
VersionID: resp.Header.Get(amzVersionID),
Expiration: expTime,
ExpirationRuleID: ruleID,
}, nil
}
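// Usage sketch: a plain server-side copy of an object up to 5GiB. Assumes a
// configured *minio.Client named client; names are placeholders.
//
//	src := minio.CopySrcOptions{Bucket: "src-bucket", Object: "src-object"}
//	dst := minio.CopyDestOptions{Bucket: "dst-bucket", Object: "dst-object"}
//	info, err := client.CopyObject(context.Background(), dst, src)
//	if err != nil {
//		log.Fatalln(err)
//	}
//	log.Println("copied to", info.Key, "etag", info.ETag)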

vendor/github.com/minio/minio-go/v7/api-datatypes.go generated vendored Normal file
@@ -0,0 +1,177 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"encoding/xml"
"io"
"net/http"
"time"
)
// BucketInfo container for bucket metadata.
type BucketInfo struct {
// The name of the bucket.
Name string `json:"name"`
// Date the bucket was created.
CreationDate time.Time `json:"creationDate"`
}
// StringMap represents map with custom UnmarshalXML
type StringMap map[string]string
// UnmarshalXML unmarshals the XML into a map of string to strings,
// creating a key in the map for each tag and setting its value to the
// tag's contents.
//
// The fact this method is on the pointer of StringMap is important, so
// that if m is nil it can be initialized, which is often the case when m
// is nested in another XML structure. This is also why the first thing
// done on the first line is to initialize it.
func (m *StringMap) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
*m = StringMap{}
type xmlMapEntry struct {
XMLName xml.Name
Value string `xml:",chardata"`
}
for {
var e xmlMapEntry
err := d.Decode(&e)
if err == io.EOF {
break
} else if err != nil {
return err
}
(*m)[e.XMLName.Local] = e.Value
}
return nil
}
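// Decoding sketch: xml.Unmarshal dispatches to the custom UnmarshalXML
// above, flattening child elements into map entries.
//
//	var m StringMap
//	err := xml.Unmarshal([]byte(
//		"<UserMetadata><Color>red</Color><Size>42</Size></UserMetadata>"), &m)
//	// err == nil, m == StringMap{"Color": "red", "Size": "42"}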
// Owner name.
type Owner struct {
XMLName xml.Name `xml:"Owner" json:"owner"`
DisplayName string `xml:"DisplayName" json:"name"`
ID string `xml:"ID" json:"id"`
}
// UploadInfo contains information about the
// newly uploaded or copied object.
type UploadInfo struct {
Bucket string
Key string
ETag string
Size int64
LastModified time.Time
Location string
VersionID string
// Lifecycle expiry-date and ruleID associated with the expiry
// not to be confused with `Expires` HTTP header.
Expiration time.Time
ExpirationRuleID string
}
// RestoreInfo contains information of the restore operation of an archived object
type RestoreInfo struct {
// Whether the restore operation is still ongoing
OngoingRestore bool
// When the restored copy of the archived object will be removed
ExpiryTime time.Time
}
// ObjectInfo container for object metadata.
type ObjectInfo struct {
// An ETag is optionally set to md5sum of an object. In case of multipart objects,
// ETag is of the form MD5SUM-N where MD5SUM is the md5sum of all the individual
// part md5sums concatenated into one string.
ETag string `json:"etag"`
Key string `json:"name"` // Name of the object
LastModified time.Time `json:"lastModified"` // Date and time the object was last modified.
Size int64 `json:"size"` // Size in bytes of the object.
ContentType string `json:"contentType"` // A standard MIME type describing the format of the object data.
Expires time.Time `json:"expires"` // The date and time at which the object is no longer able to be cached.
// Collection of additional metadata on the object.
// eg: x-amz-meta-*, content-encoding etc.
Metadata http.Header `json:"metadata" xml:"-"`
// x-amz-meta-* headers with the "x-amz-meta-" prefix stripped, keeping the first value of each.
UserMetadata StringMap `json:"userMetadata"`
// x-amz-tagging values as key/value pairs.
UserTags map[string]string `json:"userTags"`
// x-amz-tagging-count value
UserTagCount int
// Owner name.
Owner Owner
// ACL grant.
Grant []Grant
// The class of storage used to store the object.
StorageClass string `json:"storageClass"`
// Versioning related information
IsLatest bool
IsDeleteMarker bool
VersionID string `xml:"VersionId"`
// x-amz-replication-status value is in one of the following states
// - COMPLETED
// - PENDING
// - FAILED
// - REPLICA (on the destination)
ReplicationStatus string `xml:"ReplicationStatus"`
// Lifecycle expiry-date and ruleID associated with the expiry
// not to be confused with `Expires` HTTP header.
Expiration time.Time
ExpirationRuleID string
Restore *RestoreInfo
// Error
Err error `json:"-"`
}
// ObjectMultipartInfo container for multipart object metadata.
type ObjectMultipartInfo struct {
// Date and time at which the multipart upload was initiated.
Initiated time.Time `type:"timestamp" timestampFormat:"iso8601"`
Initiator initiator
Owner owner
// The type of storage to use for the object. Defaults to 'STANDARD'.
StorageClass string
// Key of the object for which the multipart upload was initiated.
Key string
// Size in bytes of the object.
Size int64
// Upload ID that identifies the multipart upload.
UploadID string `xml:"UploadId"`
// Error
Err error
}

@@ -0,0 +1,295 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"encoding/xml"
"fmt"
"io"
"io/ioutil"
"net/http"
)
/* **** SAMPLE ERROR RESPONSE ****
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<BucketName>bucketName</BucketName>
<Key>objectName</Key>
<RequestId>F19772218238A85A</RequestId>
<HostId>GuWkjyviSiGHizehqpmsD1ndz5NClSP19DOT+s2mv7gXGQ8/X1lhbDGiIJEXpGFD</HostId>
</Error>
*/
// ErrorResponse - Is the typed error returned by all API operations.
// ErrorResponse struct should be comparable since it is compared inside
// golang http API (https://github.com/golang/go/issues/29768)
type ErrorResponse struct {
XMLName xml.Name `xml:"Error" json:"-"`
Code string
Message string
BucketName string
Key string
Resource string
RequestID string `xml:"RequestId"`
HostID string `xml:"HostId"`
// Region where the bucket is located. This header is returned
// only in HEAD bucket and ListObjects response.
Region string
// Captures the server string returned in response header.
Server string
// Underlying HTTP status code for the returned error
StatusCode int `xml:"-" json:"-"`
}
// ToErrorResponse - Returns parsed ErrorResponse struct from body and
// http headers.
//
// For example:
//
// import s3 "github.com/minio/minio-go/v7"
// ...
// ...
// reader, stat, err := s3.GetObject(...)
// if err != nil {
// resp := s3.ToErrorResponse(err)
// }
// ...
func ToErrorResponse(err error) ErrorResponse {
switch err := err.(type) {
case ErrorResponse:
return err
default:
return ErrorResponse{}
}
}
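// Inspection sketch: distinguishing a missing object from other failures
// after a client call. Assumes a configured *minio.Client named client;
// names are placeholders.
//
//	_, err := client.StatObject(context.Background(), "example-bucket",
//		"maybe-missing", minio.StatObjectOptions{})
//	if err != nil {
//		resp := minio.ToErrorResponse(err)
//		if resp.Code == "NoSuchKey" {
//			// The object does not exist; create it, skip it, etc.
//		} else {
//			log.Fatalln(resp.StatusCode, resp.Message)
//		}
//	}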
// Error - Returns S3 error string.
func (e ErrorResponse) Error() string {
if e.Message == "" {
msg, ok := s3ErrorResponseMap[e.Code]
if !ok {
msg = fmt.Sprintf("Error response code %s.", e.Code)
}
return msg
}
return e.Message
}
// Common string for errors to report issue location in unexpected
// cases.
const (
reportIssue = "Please report this issue at https://github.com/minio/minio-go/issues."
)
// xmlDecodeAndBody reads the whole body up to 1MB and
// tries to XML decode it into v.
// The body that was read and any error from reading or decoding is returned.
func xmlDecodeAndBody(bodyReader io.Reader, v interface{}) ([]byte, error) {
// read the whole body (up to 1MB)
const maxBodyLength = 1 << 20
body, err := ioutil.ReadAll(io.LimitReader(bodyReader, maxBodyLength))
if err != nil {
return nil, err
}
return bytes.TrimSpace(body), xmlDecoder(bytes.NewReader(body), v)
}
// httpRespToErrorResponse returns a new encoded ErrorResponse
// structure as error.
func httpRespToErrorResponse(resp *http.Response, bucketName, objectName string) error {
if resp == nil {
msg := "Empty http response. " + reportIssue
return errInvalidArgument(msg)
}
errResp := ErrorResponse{
StatusCode: resp.StatusCode,
Server: resp.Header.Get("Server"),
}
errBody, err := xmlDecodeAndBody(resp.Body, &errResp)
// XML decoding failed; fall back to error responses synthesized from the HTTP status.
if err != nil {
switch resp.StatusCode {
case http.StatusNotFound:
if objectName == "" {
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "NoSuchBucket",
Message: "The specified bucket does not exist.",
BucketName: bucketName,
}
} else {
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "NoSuchKey",
Message: "The specified key does not exist.",
BucketName: bucketName,
Key: objectName,
}
}
case http.StatusForbidden:
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "AccessDenied",
Message: "Access Denied.",
BucketName: bucketName,
Key: objectName,
}
case http.StatusConflict:
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "Conflict",
Message: "Bucket not empty.",
BucketName: bucketName,
}
case http.StatusPreconditionFailed:
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: "PreconditionFailed",
Message: s3ErrorResponseMap["PreconditionFailed"],
BucketName: bucketName,
Key: objectName,
}
default:
msg := resp.Status
if len(errBody) > 0 {
msg = string(errBody)
if len(msg) > 1024 {
msg = msg[:1024] + "..."
}
}
errResp = ErrorResponse{
StatusCode: resp.StatusCode,
Code: resp.Status,
Message: msg,
BucketName: bucketName,
}
}
}
// Save hostID, requestID and region information
// from headers if not available through error XML.
if errResp.RequestID == "" {
errResp.RequestID = resp.Header.Get("x-amz-request-id")
}
if errResp.HostID == "" {
errResp.HostID = resp.Header.Get("x-amz-id-2")
}
if errResp.Region == "" {
errResp.Region = resp.Header.Get("x-amz-bucket-region")
}
if errResp.Code == "InvalidRegion" && errResp.Region != "" {
errResp.Message = fmt.Sprintf("Region does not match, expecting region %s.", errResp.Region)
}
return errResp
}
// errTransferAccelerationBucket - bucket name is invalid to be used with transfer acceleration.
func errTransferAccelerationBucket(bucketName string) error {
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "InvalidArgument",
Message: "The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods ..",
BucketName: bucketName,
}
}
// errEntityTooLarge - Input size is larger than supported maximum.
func errEntityTooLarge(totalSize, maxObjectSize int64, bucketName, objectName string) error {
msg := fmt.Sprintf("Your proposed upload size %d exceeds the maximum allowed object size %d for single PUT operation.", totalSize, maxObjectSize)
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "EntityTooLarge",
Message: msg,
BucketName: bucketName,
Key: objectName,
}
}
// errEntityTooSmall - Input size is smaller than supported minimum.
func errEntityTooSmall(totalSize int64, bucketName, objectName string) error {
msg := fmt.Sprintf("Your proposed upload size %d is below the minimum allowed object size 0B for single PUT operation.", totalSize)
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "EntityTooSmall",
Message: msg,
BucketName: bucketName,
Key: objectName,
}
}
// errUnexpectedEOF - Unexpected end of file reached.
func errUnexpectedEOF(totalRead, totalSize int64, bucketName, objectName string) error {
msg := fmt.Sprintf("Data read %d is not equal to the size %d of the input Reader.", totalRead, totalSize)
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "UnexpectedEOF",
Message: msg,
BucketName: bucketName,
Key: objectName,
}
}
// errInvalidBucketName - Invalid bucket name response.
func errInvalidBucketName(message string) error {
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "InvalidBucketName",
Message: message,
RequestID: "minio",
}
}
// errInvalidObjectName - Invalid object name response.
func errInvalidObjectName(message string) error {
return ErrorResponse{
StatusCode: http.StatusNotFound,
Code: "NoSuchKey",
Message: message,
RequestID: "minio",
}
}
// errInvalidArgument - Invalid argument response.
func errInvalidArgument(message string) error {
return ErrorResponse{
StatusCode: http.StatusBadRequest,
Code: "InvalidArgument",
Message: message,
RequestID: "minio",
}
}
// errAPINotSupported - API not supported response
// The specified API call is not supported
func errAPINotSupported(message string) error {
return ErrorResponse{
StatusCode: http.StatusNotImplemented,
Code: "APINotSupported",
Message: message,
RequestID: "minio",
}
}

@@ -0,0 +1,152 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2018 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"encoding/xml"
"net/http"
"net/url"
)
// Grantee represents the person being granted permissions.
type Grantee struct {
XMLName xml.Name `xml:"Grantee"`
ID string `xml:"ID"`
DisplayName string `xml:"DisplayName"`
URI string `xml:"URI"`
}
// Grant holds grant information
type Grant struct {
XMLName xml.Name `xml:"Grant"`
Grantee Grantee
Permission string `xml:"Permission"`
}
// AccessControlList contains the set of grantees and the permissions assigned to each grantee.
type AccessControlList struct {
XMLName xml.Name `xml:"AccessControlList"`
Grant []Grant
Permission string `xml:"Permission"`
}
type accessControlPolicy struct {
XMLName xml.Name `xml:"AccessControlPolicy"`
Owner Owner
AccessControlList AccessControlList
}
// GetObjectACL get object ACLs
func (c *Client) GetObjectACL(ctx context.Context, bucketName, objectName string) (*ObjectInfo, error) {
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: url.Values{
"acl": []string{""},
},
})
if err != nil {
return nil, err
}
defer closeResponse(resp)
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucketName, objectName)
}
res := &accessControlPolicy{}
if err := xmlDecoder(resp.Body, res); err != nil {
return nil, err
}
objInfo, err := c.StatObject(ctx, bucketName, objectName, StatObjectOptions{})
if err != nil {
return nil, err
}
objInfo.Owner.DisplayName = res.Owner.DisplayName
objInfo.Owner.ID = res.Owner.ID
objInfo.Grant = append(objInfo.Grant, res.AccessControlList.Grant...)
cannedACL := getCannedACL(res)
if cannedACL != "" {
objInfo.Metadata.Add("X-Amz-Acl", cannedACL)
return &objInfo, nil
}
grantACL := getAmzGrantACL(res)
for k, v := range grantACL {
objInfo.Metadata[k] = v
}
return &objInfo, nil
}
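// Usage sketch: reading an object's ACL. The canned ACL, when one can be
// derived, is exposed through the "X-Amz-Acl" metadata key. Assumes a
// configured *minio.Client named client; names are placeholders.
//
//	info, err := client.GetObjectACL(context.Background(), "example-bucket", "example-object")
//	if err != nil {
//		log.Fatalln(err)
//	}
//	log.Println("canned ACL:", info.Metadata.Get("X-Amz-Acl"))
//	for _, g := range info.Grant {
//		log.Println(g.Grantee.ID, "has", g.Permission)
//	}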
func getCannedACL(aCPolicy *accessControlPolicy) string {
grants := aCPolicy.AccessControlList.Grant
switch {
case len(grants) == 1:
if grants[0].Grantee.URI == "" && grants[0].Permission == "FULL_CONTROL" {
return "private"
}
case len(grants) == 2:
for _, g := range grants {
if g.Grantee.URI == "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" && g.Permission == "READ" {
return "authenticated-read"
}
if g.Grantee.URI == "http://acs.amazonaws.com/groups/global/AllUsers" && g.Permission == "READ" {
return "public-read"
}
if g.Permission == "READ" && g.Grantee.ID == aCPolicy.Owner.ID {
return "bucket-owner-read"
}
}
case len(grants) == 3:
for _, g := range grants {
if g.Grantee.URI == "http://acs.amazonaws.com/groups/global/AllUsers" && g.Permission == "WRITE" {
return "public-read-write"
}
}
}
return ""
}
func getAmzGrantACL(aCPolicy *accessControlPolicy) map[string][]string {
grants := aCPolicy.AccessControlList.Grant
res := map[string][]string{}
for _, g := range grants {
switch {
case g.Permission == "READ":
res["X-Amz-Grant-Read"] = append(res["X-Amz-Grant-Read"], "id="+g.Grantee.ID)
case g.Permission == "WRITE":
res["X-Amz-Grant-Write"] = append(res["X-Amz-Grant-Write"], "id="+g.Grantee.ID)
case g.Permission == "READ_ACP":
res["X-Amz-Grant-Read-Acp"] = append(res["X-Amz-Grant-Read-Acp"], "id="+g.Grantee.ID)
case g.Permission == "WRITE_ACP":
res["X-Amz-Grant-Write-Acp"] = append(res["X-Amz-Grant-Write-Acp"], "id="+g.Grantee.ID)
case g.Permission == "FULL_CONTROL":
res["X-Amz-Grant-Full-Control"] = append(res["X-Amz-Grant-Full-Control"], "id="+g.Grantee.ID)
}
}
return res
}

@@ -0,0 +1,127 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
"os"
"path/filepath"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// FGetObject - download contents of an object to a local file.
// The options can be used to specify the GET request further.
func (c *Client) FGetObject(ctx context.Context, bucketName, objectName, filePath string, opts GetObjectOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Verify if destination already exists.
st, err := os.Stat(filePath)
if err == nil {
// If the destination exists and is a directory.
if st.IsDir() {
return errInvalidArgument("fileName is a directory.")
}
}
// Proceed if the file does not exist; return for all other errors.
if err != nil {
if !os.IsNotExist(err) {
return err
}
}
// Extract top level directory.
objectDir, _ := filepath.Split(filePath)
if objectDir != "" {
// Create any missing top level directories.
if err := os.MkdirAll(objectDir, 0o700); err != nil {
return err
}
}
// Stat the object to gather its ETag.
objectStat, err := c.StatObject(ctx, bucketName, objectName, StatObjectOptions(opts))
if err != nil {
return err
}
// Write to a temporary file "<filePath><ETag>.part.minio" before saving.
filePartPath := filePath + objectStat.ETag + ".part.minio"
// If exists, open in append mode. If not create it as a part file.
filePart, err := os.OpenFile(filePartPath, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o600)
if err != nil {
return err
}
// If we return early with an error, be sure to close and delete
// filePart. If we have an error along the way there is a chance
// that filePart is somehow damaged, and we should discard it.
closeAndRemove := true
defer func() {
if closeAndRemove {
_ = filePart.Close()
_ = os.Remove(filePartPath)
}
}()
// Issue Stat to get the current offset.
st, err = filePart.Stat()
if err != nil {
return err
}
// Initialize get object request headers to set the
// appropriate range offsets to read from.
if st.Size() > 0 {
opts.SetRange(st.Size(), 0)
}
// Seek to current position for incoming reader.
objectReader, objectStat, _, err := c.getObject(ctx, bucketName, objectName, opts)
if err != nil {
return err
}
// Write to the part file.
if _, err = io.CopyN(filePart, objectReader, objectStat.Size); err != nil {
return err
}
// Close the file before rename, this is specifically needed for Windows users.
closeAndRemove = false
if err = filePart.Close(); err != nil {
return err
}
// Safely completed. Now commit by renaming to actual filename.
if err = os.Rename(filePartPath, filePath); err != nil {
return err
}
// Return.
return nil
}
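// Usage sketch: downloading an object to disk. If an earlier attempt left a
// matching "<filePath><ETag>.part.minio" file behind, the download resumes
// from its current size. Assumes a configured *minio.Client named client;
// names and paths are placeholders.
//
//	err := client.FGetObject(context.Background(), "example-bucket",
//		"report.csv", "/tmp/report.csv", minio.GetObjectOptions{})
//	if err != nil {
//		log.Fatalln(err)
//	}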

vendor/github.com/minio/minio-go/v7/api-get-object.go generated vendored Normal file
@@ -0,0 +1,695 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"sync"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// GetObject wrapper function that accepts a request context
func (c *Client) GetObject(ctx context.Context, bucketName, objectName string, opts GetObjectOptions) (*Object, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
gctx, cancel := context.WithCancel(ctx)
// Detect if the server location we are talking to is a snowball.
var snowball bool
if location, ok := c.bucketLocCache.Get(bucketName); ok {
if location == "snowball" {
snowball = true
}
}
var (
err error
httpReader io.ReadCloser
objectInfo ObjectInfo
totalRead int
)
// Create request channel.
reqCh := make(chan getRequest)
// Create response channel.
resCh := make(chan getResponse)
// This routine feeds partial object data as and when the caller reads.
go func() {
defer close(reqCh)
defer close(resCh)
// Used to verify if etag of object has changed since last read.
var etag string
// Loop through the incoming control messages and read data.
for {
select {
// When context is closed exit our routine.
case <-gctx.Done():
// Close the http response body before returning.
// This ends the connection with the server.
if httpReader != nil {
httpReader.Close()
}
return
// Gather incoming request.
case req := <-reqCh:
// If this is the first request we may not need to do a getObject request yet.
if req.isFirstReq {
// First request is a Read/ReadAt.
if req.isReadOp {
// Differentiate between wanting the whole object and just a range.
if req.isReadAt {
// If this is a ReadAt request only get the specified range.
// Range is set with respect to the offset and length of the buffer requested.
// Do not set objectInfo from the first readAt request because it will not get
// the whole object.
opts.SetRange(req.Offset, req.Offset+int64(len(req.Buffer))-1)
} else if req.Offset > 0 {
opts.SetRange(req.Offset, 0)
}
httpReader, objectInfo, _, err = c.getObject(gctx, bucketName, objectName, opts)
if err != nil {
resCh <- getResponse{Error: err}
return
}
etag = objectInfo.ETag
// Read up to len(req.Buffer) bytes; if we read less, we have
// reached our EOF.
size, err := readFull(httpReader, req.Buffer)
totalRead += size
if size > 0 && err == io.ErrUnexpectedEOF {
if int64(size) < objectInfo.Size {
// In situations when returned size
// is less than the expected content
// length set by the server, make sure
// we return io.ErrUnexpectedEOF
err = io.ErrUnexpectedEOF
} else {
// If an EOF happens after reading some but not
// all the bytes ReadFull returns ErrUnexpectedEOF
err = io.EOF
}
} else if size == 0 && err == io.EOF && objectInfo.Size > 0 {
// Special cases when server writes more data
// than the content-length, net/http response
// body returns an error, instead of converting
// it to io.EOF - return unexpected EOF.
err = io.ErrUnexpectedEOF
}
// Send back the first response.
resCh <- getResponse{
objectInfo: objectInfo,
Size: size,
Error: err,
didRead: true,
}
} else {
// First request is a Stat or Seek call.
// Only need to run a StatObject until an actual Read or ReadAt request comes through.
// Remove range header if already set, for stat Operations to get original file size.
delete(opts.headers, "Range")
objectInfo, err = c.StatObject(gctx, bucketName, objectName, StatObjectOptions(opts))
if err != nil {
resCh <- getResponse{
Error: err,
}
// Exit the go-routine.
return
}
etag = objectInfo.ETag
// Send back the first response.
resCh <- getResponse{
objectInfo: objectInfo,
}
}
} else if req.settingObjectInfo { // Request is just to get objectInfo.
// Remove range header if already set, for stat Operations to get original file size.
delete(opts.headers, "Range")
// Check whether this is snowball
// if yes do not use If-Match feature
// it doesn't work.
if etag != "" && !snowball {
opts.SetMatchETag(etag)
}
objectInfo, err := c.StatObject(gctx, bucketName, objectName, StatObjectOptions(opts))
if err != nil {
resCh <- getResponse{
Error: err,
}
// Exit the goroutine.
return
}
// Send back the objectInfo.
resCh <- getResponse{
objectInfo: objectInfo,
}
} else {
// Offset changes fetch the object again at the new offset.
// The httpReader may not be set by the first request if it
// was a Stat or Seek, so check whether the object has been
// read and only initialize a new reader when one is not
// already open.
// All ReadAt requests are new requests.
if req.DidOffsetChange || !req.beenRead {
// Check whether this is snowball
// if yes do not use If-Match feature
// it doesn't work.
if etag != "" && !snowball {
opts.SetMatchETag(etag)
}
if httpReader != nil {
// Close previously opened http reader.
httpReader.Close()
}
// If this request is a readAt only get the specified range.
if req.isReadAt {
// Range is set with respect to the offset and length of the buffer requested.
opts.SetRange(req.Offset, req.Offset+int64(len(req.Buffer))-1)
} else if req.Offset > 0 { // Range is set with respect to the offset.
opts.SetRange(req.Offset, 0)
} else {
// Remove range header if already set
delete(opts.headers, "Range")
}
httpReader, objectInfo, _, err = c.getObject(gctx, bucketName, objectName, opts)
if err != nil {
resCh <- getResponse{
Error: err,
}
return
}
totalRead = 0
}
// Read up to len(req.Buffer) bytes; if we read less, we have
// reached our EOF.
size, err := readFull(httpReader, req.Buffer)
totalRead += size
if size > 0 && err == io.ErrUnexpectedEOF {
if int64(totalRead) < objectInfo.Size {
// In situations when returned size
// is less than the expected content
// length set by the server, make sure
// we return io.ErrUnexpectedEOF
err = io.ErrUnexpectedEOF
} else {
// If an EOF happens after reading some but not
// all the bytes ReadFull returns ErrUnexpectedEOF
err = io.EOF
}
} else if size == 0 && err == io.EOF && objectInfo.Size > 0 {
// Special cases when server writes more data
// than the content-length, net/http response
// body returns an error, instead of converting
// it to io.EOF - return unexpected EOF.
err = io.ErrUnexpectedEOF
}
// Reply back how much was read.
resCh <- getResponse{
Size: size,
Error: err,
didRead: true,
objectInfo: objectInfo,
}
}
}
}
}()
// Create a newObject through the information sent back by reqCh.
return newObject(gctx, cancel, reqCh, resCh), nil
}
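// Usage sketch: the returned *Object implements Read, ReadAt, Seek and
// Close, with range requests issued lazily as the caller reads. Assumes a
// configured *minio.Client named client; names are placeholders.
//
//	obj, err := client.GetObject(context.Background(), "example-bucket",
//		"example-object", minio.GetObjectOptions{})
//	if err != nil {
//		log.Fatalln(err)
//	}
//	defer obj.Close()
//	buf := make([]byte, 512)
//	n, err := obj.ReadAt(buf, 1024) // 512 bytes starting at offset 1024
//	if err != nil && err != io.EOF {
//		log.Fatalln(err)
//	}
//	log.Println("read", n, "bytes")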
// get request message container to communicate with internal
// go-routine.
type getRequest struct {
Buffer []byte
Offset int64 // readAt offset.
DidOffsetChange bool // Tracks the offset changes for Seek requests.
beenRead bool // Determines if this is the first time an object is being read.
isReadAt bool // Determines if this request is a request to a specific range
isReadOp bool // Determines if this request is a Read or Read/At request.
isFirstReq bool // Determines if this request is the first time an object is being accessed.
settingObjectInfo bool // Determines if this request is to set the objectInfo of an object.
}
// get response message container to reply back for the request.
type getResponse struct {
Size int
Error error
didRead bool // Lets subsequent calls know whether or not httpReader has been initiated.
objectInfo ObjectInfo // Used for the first request.
}
// Object represents an open object. It implements
// Reader, ReaderAt, Seeker, Closer for a HTTP stream.
type Object struct {
// Mutex.
mutex *sync.Mutex
// User allocated and defined.
reqCh chan<- getRequest
resCh <-chan getResponse
ctx context.Context
cancel context.CancelFunc
currOffset int64
objectInfo ObjectInfo
// Ask lower level to initiate data fetching based on currOffset
seekData bool
// Keeps track of closed call.
isClosed bool
// Keeps track of if this is the first call.
isStarted bool
// Previous error saved for future calls.
prevErr error
// Keeps track of if this object has been read yet.
beenRead bool
// Keeps track of if objectInfo has been set yet.
objectInfoSet bool
}
// doGetRequest - sends a request on the object's reqCh and blocks on resCh
// for the reply. Returns the size of the buffer read, whether anything was
// read, and any error encountered. For the first request sent on the object
// it is also responsible for sending back the objectInfo.
func (o *Object) doGetRequest(request getRequest) (getResponse, error) {
select {
case <-o.ctx.Done():
return getResponse{}, o.ctx.Err()
case o.reqCh <- request:
}
response := <-o.resCh
// Return any error to the top level.
if response.Error != nil {
return response, response.Error
}
// This was the first request.
if !o.isStarted {
// The object has been operated on.
o.isStarted = true
}
// Set the objectInfo if the request was not readAt
// and it hasn't been set before.
if !o.objectInfoSet && !request.isReadAt {
o.objectInfo = response.objectInfo
o.objectInfoSet = true
}
// Set beenRead only if it has not been set before.
if !o.beenRead {
o.beenRead = response.didRead
}
// Data is ready on the wire; no need to re-initiate the connection at the lower level.
o.seekData = false
return response, nil
}
// setOffset - handles the setting of offsets for
// Read/ReadAt/Seek requests.
func (o *Object) setOffset(bytesRead int64) error {
// Update the currentOffset.
o.currOffset += bytesRead
if o.objectInfo.Size > -1 && o.currOffset >= o.objectInfo.Size {
return io.EOF
}
return nil
}
// Read reads up to len(b) bytes into b. It returns the number of
// bytes read (0 <= n <= len(b)) and any error encountered. Returns
// io.EOF upon end of file.
func (o *Object) Read(b []byte) (n int, err error) {
if o == nil {
return 0, errInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
// prevErr is previous error saved from previous operation.
if o.prevErr != nil || o.isClosed {
return 0, o.prevErr
}
// Create a new request.
readReq := getRequest{
isReadOp: true,
beenRead: o.beenRead,
Buffer: b,
}
// Alert that this is the first request.
if !o.isStarted {
readReq.isFirstReq = true
}
// Ask to establish a new data fetch routine based on seekData flag
readReq.DidOffsetChange = o.seekData
readReq.Offset = o.currOffset
// Send and receive from the first request.
response, err := o.doGetRequest(readReq)
if err != nil && err != io.EOF {
// Save the error for future calls.
o.prevErr = err
return response.Size, err
}
// Bytes read.
bytesRead := int64(response.Size)
// Set the new offset.
oerr := o.setOffset(bytesRead)
if oerr != nil {
// Save the error for future calls.
o.prevErr = oerr
return response.Size, oerr
}
// Return the response.
return response.Size, err
}
// Stat returns the ObjectInfo structure describing Object.
func (o *Object) Stat() (ObjectInfo, error) {
if o == nil {
return ObjectInfo{}, errInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
if o.prevErr != nil && o.prevErr != io.EOF || o.isClosed {
return ObjectInfo{}, o.prevErr
}
// This is the first request.
if !o.isStarted || !o.objectInfoSet {
// Send the request and get the response.
_, err := o.doGetRequest(getRequest{
isFirstReq: !o.isStarted,
settingObjectInfo: !o.objectInfoSet,
})
if err != nil {
o.prevErr = err
return ObjectInfo{}, err
}
}
return o.objectInfo, nil
}
// ReadAt reads len(b) bytes from the File starting at byte offset
// off. It returns the number of bytes read and the error, if any.
// ReadAt always returns a non-nil error when n < len(b). At end of
// file, that error is io.EOF.
func (o *Object) ReadAt(b []byte, offset int64) (n int, err error) {
if o == nil {
return 0, errInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
// prevErr is error which was saved in previous operation.
if o.prevErr != nil && o.prevErr != io.EOF || o.isClosed {
return 0, o.prevErr
}
// Set the current offset to ReadAt offset, because the current offset will be shifted at the end of this method.
o.currOffset = offset
// Can only compare offsets to size when size has been set.
if o.objectInfoSet {
// If offset is negative then we return io.EOF.
// If offset is greater than or equal to object size we return io.EOF.
if (o.objectInfo.Size > -1 && offset >= o.objectInfo.Size) || offset < 0 {
return 0, io.EOF
}
}
// Create the new readAt request.
readAtReq := getRequest{
isReadOp: true,
isReadAt: true,
DidOffsetChange: true, // Offset always changes.
beenRead: o.beenRead, // Set if this is the first request to try and read.
Offset: offset, // Set the offset.
Buffer: b,
}
// Alert that this is the first request.
if !o.isStarted {
readAtReq.isFirstReq = true
}
// Send and receive from the first request.
response, err := o.doGetRequest(readAtReq)
if err != nil && err != io.EOF {
// Save the error.
o.prevErr = err
return response.Size, err
}
// Bytes read.
bytesRead := int64(response.Size)
// There is no valid objectInfo yet
// to compare against for EOF.
if !o.objectInfoSet {
// Update the currentOffset.
o.currOffset += bytesRead
} else {
// If this was not the first request update
// the offsets and compare against objectInfo
// for EOF.
oerr := o.setOffset(bytesRead)
if oerr != nil {
o.prevErr = oerr
return response.Size, oerr
}
}
return response.Size, err
}
// Seek sets the offset for the next Read or Write to offset,
// interpreted according to whence: 0 means relative to the
// origin of the file, 1 means relative to the current offset,
// and 2 means relative to the end.
// Seek returns the new offset and an error, if any.
//
// Seeking to a negative offset is an error. Seeking to any positive
// offset is legal; subsequent I/O operations succeed as long as the
// underlying object is not closed.
func (o *Object) Seek(offset int64, whence int) (n int64, err error) {
if o == nil {
return 0, errInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
// Seeking at EOF is legal; allow a saved io.EOF, but return any other saved error.
if o.prevErr != nil && o.prevErr != io.EOF {
return 0, o.prevErr
}
// Negative offset is valid for whence of '2'.
if offset < 0 && whence != 2 {
return 0, errInvalidArgument(fmt.Sprintf("Negative position not allowed for %d", whence))
}
// This is the first request. So before anything else
// get the ObjectInfo.
if !o.isStarted || !o.objectInfoSet {
// Create the new Seek request.
seekReq := getRequest{
isReadOp: false,
Offset: offset,
isFirstReq: true,
}
// Send and receive from the seek request.
_, err := o.doGetRequest(seekReq)
if err != nil {
// Save the error.
o.prevErr = err
return 0, err
}
}
// Switch through whence.
switch whence {
default:
return 0, errInvalidArgument(fmt.Sprintf("Invalid whence %d", whence))
case 0:
if o.objectInfo.Size > -1 && offset > o.objectInfo.Size {
return 0, io.EOF
}
o.currOffset = offset
case 1:
if o.objectInfo.Size > -1 && o.currOffset+offset > o.objectInfo.Size {
return 0, io.EOF
}
o.currOffset += offset
case 2:
// If we don't know the object size return an error for io.SeekEnd
if o.objectInfo.Size < 0 {
return 0, errInvalidArgument("Whence END is not supported when the object size is unknown")
}
// Seeking to positive offset is valid for whence '2', but
// since we are backing a Reader we have reached 'EOF' if
// offset is positive.
if offset > 0 {
return 0, io.EOF
}
// Seeking to a negative final position is not allowed for this whence.
if o.objectInfo.Size+offset < 0 {
return 0, errInvalidArgument(fmt.Sprintf("Seeking at negative offset not allowed for %d", whence))
}
o.currOffset = o.objectInfo.Size + offset
}
// Reset the saved error since the seek succeeded; let Read
// and ReadAt decide.
if o.prevErr == io.EOF {
o.prevErr = nil
}
// Ask lower level to fetch again from source
o.seekData = true
// Return the effective offset.
return o.currOffset, nil
}
// Close - closes the underlying stream. After the first call, any
// subsequent Close() call returns the saved error.
func (o *Object) Close() (err error) {
if o == nil {
return errInvalidArgument("Object is nil")
}
// Locking.
o.mutex.Lock()
defer o.mutex.Unlock()
// if already closed return an error.
if o.isClosed {
return o.prevErr
}
// Close successfully.
o.cancel()
// Save for future operations.
errMsg := "Object is already closed. Bad file descriptor."
o.prevErr = errors.New(errMsg)
// Save here that we closed done channel successfully.
o.isClosed = true
return nil
}
// newObject instantiates a new minio.Object.
// Its ObjectInfo is populated by the first request through doGetRequest.
func newObject(ctx context.Context, cancel context.CancelFunc, reqCh chan<- getRequest, resCh <-chan getResponse) *Object {
return &Object{
ctx: ctx,
cancel: cancel,
mutex: &sync.Mutex{},
reqCh: reqCh,
resCh: resCh,
}
}
// getObject - retrieve object from Object Storage.
//
// Additionally this function also takes range arguments to download the specified
// range bytes of an object. Setting offset and length = 0 will download the full object.
//
// For more information about the HTTP Range header, see
// http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.
func (c *Client) getObject(ctx context.Context, bucketName, objectName string, opts GetObjectOptions) (io.ReadCloser, ObjectInfo, http.Header, error) {
// Validate input arguments.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, ObjectInfo{}, nil, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return nil, ObjectInfo{}, nil, err
}
urlValues := make(url.Values)
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
if opts.PartNumber > 0 {
urlValues.Set("partNumber", strconv.Itoa(opts.PartNumber))
}
// Execute GET on objectName.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
customHeader: opts.Header(),
contentSHA256Hex: emptySHA256Hex,
})
if err != nil {
return nil, ObjectInfo{}, nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
return nil, ObjectInfo{}, nil, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
objectStat, err := ToObjectInfo(bucketName, objectName, resp.Header)
if err != nil {
closeResponse(resp)
return nil, ObjectInfo{}, nil, err
}
// do not close body here, caller will close
return resp.Body, objectStat, resp.Header, nil
}

141
vendor/github.com/minio/minio-go/v7/api-get-options.go generated vendored Normal file
View File

@ -0,0 +1,141 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"fmt"
"net/http"
"time"
"github.com/minio/minio-go/v7/pkg/encrypt"
)
// AdvancedGetOptions for internal use by MinIO server - not intended for client use.
type AdvancedGetOptions struct {
ReplicationDeleteMarker bool
ReplicationProxyRequest string
}
// GetObjectOptions are used to specify additional headers or options
// during GET requests.
type GetObjectOptions struct {
headers map[string]string
ServerSideEncryption encrypt.ServerSide
VersionID string
PartNumber int
// To be not used by external applications
Internal AdvancedGetOptions
}
// StatObjectOptions are used to specify additional headers or options
// during GET info/stat requests.
type StatObjectOptions = GetObjectOptions
// Header returns the http.Header representation of the GET options.
func (o GetObjectOptions) Header() http.Header {
headers := make(http.Header, len(o.headers))
for k, v := range o.headers {
headers.Set(k, v)
}
if o.ServerSideEncryption != nil && o.ServerSideEncryption.Type() == encrypt.SSEC {
o.ServerSideEncryption.Marshal(headers)
}
// this header is set for active-active replication scenario where GET/HEAD
// to site A is proxy'd to site B if object/version missing on site A.
if o.Internal.ReplicationProxyRequest != "" {
headers.Set(minIOBucketReplicationProxyRequest, o.Internal.ReplicationProxyRequest)
}
return headers
}
// Set adds a key value pair to the options. The
// key-value pair will be part of the HTTP GET request
// headers.
func (o *GetObjectOptions) Set(key, value string) {
if o.headers == nil {
o.headers = make(map[string]string)
}
o.headers[http.CanonicalHeaderKey(key)] = value
}
// SetMatchETag - set match etag.
func (o *GetObjectOptions) SetMatchETag(etag string) error {
if etag == "" {
return errInvalidArgument("ETag cannot be empty.")
}
o.Set("If-Match", "\""+etag+"\"")
return nil
}
// SetMatchETagExcept - set match etag except.
func (o *GetObjectOptions) SetMatchETagExcept(etag string) error {
if etag == "" {
return errInvalidArgument("ETag cannot be empty.")
}
o.Set("If-None-Match", "\""+etag+"\"")
return nil
}
// SetUnmodified - set unmodified time since.
func (o *GetObjectOptions) SetUnmodified(modTime time.Time) error {
if modTime.IsZero() {
return errInvalidArgument("Modified since cannot be empty.")
}
o.Set("If-Unmodified-Since", modTime.Format(http.TimeFormat))
return nil
}
// SetModified - set modified time since.
func (o *GetObjectOptions) SetModified(modTime time.Time) error {
if modTime.IsZero() {
return errInvalidArgument("Modified since cannot be empty.")
}
o.Set("If-Modified-Since", modTime.Format(http.TimeFormat))
return nil
}
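// Illustrative sketch, not part of the upstream source: combining the
// conditional setters for a cache-style GET. The etag literal and the
// lastSync time.Time are hypothetical values a caller would have cached.
//
// opts := minio.GetObjectOptions{}
// if err := opts.SetMatchETagExcept("d41d8cd98f00b204e9800998ecf8427e"); err != nil {
//     return err
// }
// if err := opts.SetModified(lastSync); err != nil {
//     return err
// }
// // The server replies 304 Not Modified when the object is unchanged.
// obj, err := client.GetObject(ctx, "mybucket", "myobject", opts)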
// SetRange - set the start and end offset of the object to be read.
// See https://tools.ietf.org/html/rfc7233#section-3.1 for reference.
func (o *GetObjectOptions) SetRange(start, end int64) error {
switch {
case start == 0 && end < 0:
// Read last '-end' bytes. `bytes=-N`.
o.Set("Range", fmt.Sprintf("bytes=%d", end))
case 0 < start && end == 0:
// Read everything starting from offset
// 'start'. `bytes=N-`.
o.Set("Range", fmt.Sprintf("bytes=%d-", start))
case 0 <= start && start <= end:
// Read everything starting at 'start' till the
// 'end'. `bytes=N-M`
o.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
default:
// All other cases such as
// bytes=-3-
// bytes=5-3
// bytes=-2-4
// bytes=-3-0
// bytes=-3--2
// are invalid.
return errInvalidArgument(
fmt.Sprintf(
"Invalid range specified: start=%d end=%d",
start, end))
}
return nil
}
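// Illustrative sketch, not part of the upstream source: the three valid
// SetRange shapes. Each call overwrites the previous Range header, so a
// real caller would pick one.
//
// opts := minio.GetObjectOptions{}
// _ = opts.SetRange(0, 1023)  // first KiB: "bytes=0-1023"
// _ = opts.SetRange(1024, 0)  // everything from offset 1024: "bytes=1024-"
// _ = opts.SetRange(0, -256)  // last 256 bytes: "bytes=-256"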

975
vendor/github.com/minio/minio-go/v7/api-list.go generated vendored Normal file
View File

@ -0,0 +1,975 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"fmt"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// ListBuckets lists all buckets owned by this authenticated user.
//
// This call requires explicit authentication; no anonymous requests are
// allowed for listing buckets.
//
// api := client.New(....)
// for message := range api.ListBuckets(context.Background()) {
// fmt.Println(message)
// }
//
func (c *Client) ListBuckets(ctx context.Context) ([]BucketInfo, error) {
// Execute GET on service.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{contentSHA256Hex: emptySHA256Hex})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, "", "")
}
}
listAllMyBucketsResult := listAllMyBucketsResult{}
err = xmlDecoder(resp.Body, &listAllMyBucketsResult)
if err != nil {
return nil, err
}
return listAllMyBucketsResult.Buckets.Bucket, nil
}
// Bucket List Operations.
func (c *Client) listObjectsV2(ctx context.Context, bucketName string, opts ListObjectsOptions) <-chan ObjectInfo {
// Allocate new list objects channel.
objectStatCh := make(chan ObjectInfo, 1)
// Default listing is delimited at "/"
delimiter := "/"
if opts.Recursive {
// If recursive we do not delimit.
delimiter = ""
}
// Return object owner information by default
fetchOwner := true
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Validate incoming object prefix.
if err := s3utils.CheckValidObjectNamePrefix(opts.Prefix); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Initiate list objects goroutine here.
go func(objectStatCh chan<- ObjectInfo) {
defer close(objectStatCh)
// Save continuationToken for next request.
var continuationToken string
for {
// Get a list of objects, at most 1000 per request.
result, err := c.listObjectsV2Query(ctx, bucketName, opts.Prefix, continuationToken,
fetchOwner, opts.WithMetadata, delimiter, opts.StartAfter, opts.MaxKeys, opts.headers)
if err != nil {
objectStatCh <- ObjectInfo{
Err: err,
}
return
}
// If contents are available loop through and send over channel.
for _, object := range result.Contents {
object.ETag = trimEtag(object.ETag)
select {
// Send object content.
case objectStatCh <- object:
// If the caller signals done, return here.
case <-ctx.Done():
return
}
}
// Send all common prefixes if any.
// NOTE: prefixes are only present if the request is delimited.
for _, obj := range result.CommonPrefixes {
select {
// Send object prefixes.
case objectStatCh <- ObjectInfo{Key: obj.Prefix}:
// If the caller signals done, return here.
case <-ctx.Done():
return
}
}
// If continuation token present, save it for next request.
if result.NextContinuationToken != "" {
continuationToken = result.NextContinuationToken
}
// Listing ends when the result is not truncated; return right here.
if !result.IsTruncated {
return
}
}
}(objectStatCh)
return objectStatCh
}
// listObjectsV2Query - (List Objects V2) - List some or all (up to 1000) of the objects in a bucket.
//
// You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
// request parameters :-
// ---------
// ?prefix - Limits the response to keys that begin with the specified prefix.
// ?continuation-token - Used to continue iterating over a set of objects
// ?metadata - Specifies if we want metadata for the objects as part of list operation.
// ?delimiter - A delimiter is a character you use to group keys.
// ?start-after - Sets a marker to start listing lexically at this key onwards.
// ?max-keys - Sets the maximum number of keys returned in the response body.
func (c *Client) listObjectsV2Query(ctx context.Context, bucketName, objectPrefix, continuationToken string, fetchOwner, metadata bool, delimiter string, startAfter string, maxkeys int, headers http.Header) (ListBucketV2Result, error) {
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ListBucketV2Result{}, err
}
// Validate object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
return ListBucketV2Result{}, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
// Always set list-type in ListObjects V2
urlValues.Set("list-type", "2")
if metadata {
urlValues.Set("metadata", "true")
}
// Set this conditionally if asked
if startAfter != "" {
urlValues.Set("start-after", startAfter)
}
// Always set encoding-type in ListObjects V2
urlValues.Set("encoding-type", "url")
// Set object prefix; an empty prefix value is okay.
urlValues.Set("prefix", objectPrefix)
// Set delimiter; an empty delimiter value is okay.
urlValues.Set("delimiter", delimiter)
// Set continuation token
if continuationToken != "" {
urlValues.Set("continuation-token", continuationToken)
}
// Fetch owner when listing
if fetchOwner {
urlValues.Set("fetch-owner", "true")
}
// Set max keys.
if maxkeys > 0 {
urlValues.Set("max-keys", fmt.Sprintf("%d", maxkeys))
}
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
customHeader: headers,
})
defer closeResponse(resp)
if err != nil {
return ListBucketV2Result{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListBucketV2Result{}, httpRespToErrorResponse(resp, bucketName, "")
}
}
// Decode listBuckets XML.
listBucketResult := ListBucketV2Result{}
if err = xmlDecoder(resp.Body, &listBucketResult); err != nil {
return listBucketResult, err
}
// This is an additional verification check to make
// sure proper responses are received.
if listBucketResult.IsTruncated && listBucketResult.NextContinuationToken == "" {
return listBucketResult, ErrorResponse{
Code: "NotImplemented",
Message: "Truncated response should have continuation token set",
}
}
for i, obj := range listBucketResult.Contents {
listBucketResult.Contents[i].Key, err = decodeS3Name(obj.Key, listBucketResult.EncodingType)
if err != nil {
return listBucketResult, err
}
listBucketResult.Contents[i].LastModified = listBucketResult.Contents[i].LastModified.Truncate(time.Millisecond)
}
for i, obj := range listBucketResult.CommonPrefixes {
listBucketResult.CommonPrefixes[i].Prefix, err = decodeS3Name(obj.Prefix, listBucketResult.EncodingType)
if err != nil {
return listBucketResult, err
}
}
// Success.
return listBucketResult, nil
}
func (c *Client) listObjects(ctx context.Context, bucketName string, opts ListObjectsOptions) <-chan ObjectInfo {
// Allocate new list objects channel.
objectStatCh := make(chan ObjectInfo, 1)
// Default listing is delimited at "/"
delimiter := "/"
if opts.Recursive {
// If recursive we do not delimit.
delimiter = ""
}
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Validate incoming object prefix.
if err := s3utils.CheckValidObjectNamePrefix(opts.Prefix); err != nil {
defer close(objectStatCh)
objectStatCh <- ObjectInfo{
Err: err,
}
return objectStatCh
}
// Initiate list objects goroutine here.
go func(objectStatCh chan<- ObjectInfo) {
defer close(objectStatCh)
marker := opts.StartAfter
for {
// Get a list of objects, at most 1000 per request.
result, err := c.listObjectsQuery(ctx, bucketName, opts.Prefix, marker, delimiter, opts.MaxKeys, opts.headers)
if err != nil {
objectStatCh <- ObjectInfo{
Err: err,
}
return
}
// If contents are available loop through and send over channel.
for _, object := range result.Contents {
// Save the marker.
marker = object.Key
select {
// Send object content.
case objectStatCh <- object:
// If the caller signals done, return here.
case <-ctx.Done():
return
}
}
// Send all common prefixes if any.
// NOTE: prefixes are only present if the request is delimited.
for _, obj := range result.CommonPrefixes {
select {
// Send object prefixes.
case objectStatCh <- ObjectInfo{Key: obj.Prefix}:
// If the caller signals done, return here.
case <-ctx.Done():
return
}
}
// If next marker present, save it for next request.
if result.NextMarker != "" {
marker = result.NextMarker
}
// Listing ends when the result is not truncated; return right here.
if !result.IsTruncated {
return
}
}
}(objectStatCh)
return objectStatCh
}
func (c *Client) listObjectVersions(ctx context.Context, bucketName string, opts ListObjectsOptions) <-chan ObjectInfo {
// Allocate new list objects channel.
resultCh := make(chan ObjectInfo, 1)
// Default listing is delimited at "/"
delimiter := "/"
if opts.Recursive {
// If recursive we do not delimit.
delimiter = ""
}
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(resultCh)
resultCh <- ObjectInfo{
Err: err,
}
return resultCh
}
// Validate incoming object prefix.
if err := s3utils.CheckValidObjectNamePrefix(opts.Prefix); err != nil {
defer close(resultCh)
resultCh <- ObjectInfo{
Err: err,
}
return resultCh
}
// Initiate list objects goroutine here.
go func(resultCh chan<- ObjectInfo) {
defer close(resultCh)
var (
keyMarker = ""
versionIDMarker = ""
)
for {
// Get a list of objects, at most 1000 per request.
result, err := c.listObjectVersionsQuery(ctx, bucketName, opts.Prefix, keyMarker, versionIDMarker, delimiter, opts.MaxKeys, opts.headers)
if err != nil {
resultCh <- ObjectInfo{
Err: err,
}
return
}
// If contents are available loop through and send over channel.
for _, version := range result.Versions {
info := ObjectInfo{
ETag: trimEtag(version.ETag),
Key: version.Key,
LastModified: version.LastModified.Truncate(time.Millisecond),
Size: version.Size,
Owner: version.Owner,
StorageClass: version.StorageClass,
IsLatest: version.IsLatest,
VersionID: version.VersionID,
IsDeleteMarker: version.isDeleteMarker,
}
select {
// Send object version info.
case resultCh <- info:
// If the caller signals done, return here.
case <-ctx.Done():
return
}
}
// Send all common prefixes if any.
// NOTE: prefixes are only present if the request is delimited.
for _, obj := range result.CommonPrefixes {
select {
// Send object prefixes.
case resultCh <- ObjectInfo{Key: obj.Prefix}:
// If the caller signals done, return here.
case <-ctx.Done():
return
}
}
// If next key marker is present, save it for next request.
if result.NextKeyMarker != "" {
keyMarker = result.NextKeyMarker
}
// If next version id marker is present, save it for next request.
if result.NextVersionIDMarker != "" {
versionIDMarker = result.NextVersionIDMarker
}
// Listing ends when the result is not truncated; return right here.
if !result.IsTruncated {
return
}
}
}(resultCh)
return resultCh
}
// listObjectVersions - (List Object Versions) - List some or all (up to 1000) of the existing objects
// and their versions in a bucket.
//
// You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
// request parameters :-
// ---------
// ?key-marker - Specifies the key to start with when listing objects in a bucket.
// ?version-id-marker - Specifies the version id marker to start with when listing objects with versions in a bucket.
// ?delimiter - A delimiter is a character you use to group keys.
// ?prefix - Limits the response to keys that begin with the specified prefix.
// ?max-keys - Sets the maximum number of keys returned in the response body.
func (c *Client) listObjectVersionsQuery(ctx context.Context, bucketName, prefix, keyMarker, versionIDMarker, delimiter string, maxkeys int, headers http.Header) (ListVersionsResult, error) {
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ListVersionsResult{}, err
}
// Validate object prefix.
if err := s3utils.CheckValidObjectNamePrefix(prefix); err != nil {
return ListVersionsResult{}, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
// Set versions to trigger versioning API
urlValues.Set("versions", "")
// Set object prefix; an empty prefix value is okay.
urlValues.Set("prefix", prefix)
// Set delimiter; an empty delimiter value is okay.
urlValues.Set("delimiter", delimiter)
// Set object marker.
if keyMarker != "" {
urlValues.Set("key-marker", keyMarker)
}
// Set max keys.
if maxkeys > 0 {
urlValues.Set("max-keys", fmt.Sprintf("%d", maxkeys))
}
// Set version ID marker
if versionIDMarker != "" {
urlValues.Set("version-id-marker", versionIDMarker)
}
// Always set encoding-type
urlValues.Set("encoding-type", "url")
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
customHeader: headers,
})
defer closeResponse(resp)
if err != nil {
return ListVersionsResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListVersionsResult{}, httpRespToErrorResponse(resp, bucketName, "")
}
}
// Decode ListVersionsResult XML.
listObjectVersionsOutput := ListVersionsResult{}
err = xmlDecoder(resp.Body, &listObjectVersionsOutput)
if err != nil {
return ListVersionsResult{}, err
}
for i, obj := range listObjectVersionsOutput.Versions {
listObjectVersionsOutput.Versions[i].Key, err = decodeS3Name(obj.Key, listObjectVersionsOutput.EncodingType)
if err != nil {
return listObjectVersionsOutput, err
}
}
for i, obj := range listObjectVersionsOutput.CommonPrefixes {
listObjectVersionsOutput.CommonPrefixes[i].Prefix, err = decodeS3Name(obj.Prefix, listObjectVersionsOutput.EncodingType)
if err != nil {
return listObjectVersionsOutput, err
}
}
if listObjectVersionsOutput.NextKeyMarker != "" {
listObjectVersionsOutput.NextKeyMarker, err = decodeS3Name(listObjectVersionsOutput.NextKeyMarker, listObjectVersionsOutput.EncodingType)
if err != nil {
return listObjectVersionsOutput, err
}
}
return listObjectVersionsOutput, nil
}
// listObjects - (List Objects) - List some or all (up to 1000) of the objects in a bucket.
//
// You can use the request parameters as selection criteria to return a subset of the objects in a bucket.
// request parameters :-
// ---------
// ?marker - Specifies the key to start with when listing objects in a bucket.
// ?delimiter - A delimiter is a character you use to group keys.
// ?prefix - Limits the response to keys that begin with the specified prefix.
// ?max-keys - Sets the maximum number of keys returned in the response body.
func (c *Client) listObjectsQuery(ctx context.Context, bucketName, objectPrefix, objectMarker, delimiter string, maxkeys int, headers http.Header) (ListBucketResult, error) {
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ListBucketResult{}, err
}
// Validate object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
return ListBucketResult{}, err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
// Set object prefix; an empty prefix value is okay.
urlValues.Set("prefix", objectPrefix)
// Set delimiter; an empty delimiter value is okay.
urlValues.Set("delimiter", delimiter)
// Set object marker.
if objectMarker != "" {
urlValues.Set("marker", objectMarker)
}
// Set max keys.
if maxkeys > 0 {
urlValues.Set("max-keys", fmt.Sprintf("%d", maxkeys))
}
// Always set encoding-type
urlValues.Set("encoding-type", "url")
// Execute GET on bucket to list objects.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
customHeader: headers,
})
defer closeResponse(resp)
if err != nil {
return ListBucketResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListBucketResult{}, httpRespToErrorResponse(resp, bucketName, "")
}
}
// Decode listBuckets XML.
listBucketResult := ListBucketResult{}
err = xmlDecoder(resp.Body, &listBucketResult)
if err != nil {
return listBucketResult, err
}
for i, obj := range listBucketResult.Contents {
listBucketResult.Contents[i].Key, err = decodeS3Name(obj.Key, listBucketResult.EncodingType)
if err != nil {
return listBucketResult, err
}
listBucketResult.Contents[i].LastModified = listBucketResult.Contents[i].LastModified.Truncate(time.Millisecond)
}
for i, obj := range listBucketResult.CommonPrefixes {
listBucketResult.CommonPrefixes[i].Prefix, err = decodeS3Name(obj.Prefix, listBucketResult.EncodingType)
if err != nil {
return listBucketResult, err
}
}
if listBucketResult.NextMarker != "" {
listBucketResult.NextMarker, err = decodeS3Name(listBucketResult.NextMarker, listBucketResult.EncodingType)
if err != nil {
return listBucketResult, err
}
}
return listBucketResult, nil
}
// ListObjectsOptions holds all options of a list object request
type ListObjectsOptions struct {
// Include objects versions in the listing
WithVersions bool
// Include objects metadata in the listing
WithMetadata bool
// Only list objects with the prefix
Prefix string
// Ignore '/' delimiter
Recursive bool
// The maximum number of objects requested per
// batch; an advanced use case not useful for most
// applications
MaxKeys int
// StartAfter starts listing lexically at this
// object onwards; this value can also be set
// as the Marker when `UseV1` is set to true.
StartAfter string
// Use the deprecated list objects V1 API
UseV1 bool
headers http.Header
}
// Set adds a key value pair to the options. The
// key-value pair will be part of the HTTP GET request
// headers.
func (o *ListObjectsOptions) Set(key, value string) {
if o.headers == nil {
o.headers = make(http.Header)
}
o.headers.Set(key, value)
}
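// Illustrative sketch, not part of the upstream source: attaching a custom
// header to every list request, e.g. for requester-pays buckets. The prefix
// is hypothetical; x-amz-request-payer is a standard S3 header.
//
// opts := minio.ListObjectsOptions{Prefix: "logs/", Recursive: true}
// opts.Set("x-amz-request-payer", "requester")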
// ListObjects returns objects list after evaluating the passed options.
//
// api := client.New(....)
// for object := range api.ListObjects(ctx, "mytestbucket", minio.ListObjectsOptions{Prefix: "starthere", Recursive:true}) {
// fmt.Println(object)
// }
//
func (c *Client) ListObjects(ctx context.Context, bucketName string, opts ListObjectsOptions) <-chan ObjectInfo {
if opts.WithVersions {
return c.listObjectVersions(ctx, bucketName, opts)
}
// Use legacy list objects v1 API
if opts.UseV1 {
return c.listObjects(ctx, bucketName, opts)
}
// Check whether this is a snowball region; ListObjectsV2 doesn't work there, so fall back to listObjects (V1).
if location, ok := c.bucketLocCache.Get(bucketName); ok {
if location == "snowball" {
return c.listObjects(ctx, bucketName, opts)
}
}
return c.listObjectsV2(ctx, bucketName, opts)
}
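// Illustrative sketch, not part of the upstream source: draining the channel
// returned by ListObjects and checking the per-entry Err field, which is how
// listing errors are surfaced. The bucket name is hypothetical.
//
// for object := range client.ListObjects(ctx, "mybucket", minio.ListObjectsOptions{
//     Prefix:    "photos/",
//     Recursive: true,
// }) {
//     if object.Err != nil {
//         return object.Err
//     }
//     fmt.Println(object.Key, object.Size)
// }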
// ListIncompleteUploads - List incompletely uploaded multipart objects.
//
// ListIncompleteUploads lists all incomplete objects matching the
// objectPrefix from the specified bucket. If recursion is enabled
// it lists all subdirectories and all their contents.
//
// Your input parameters are just bucketName, objectPrefix and recursive.
// If you set recursive to 'true' this function returns all
// the multipart objects in the given bucket.
//
// api := client.New(....)
// // Recursively list all objects in 'mytestbucket'
// recursive := true
// for message := range api.ListIncompleteUploads(context.Background(), "mytestbucket", "starthere", recursive) {
// fmt.Println(message)
// }
func (c *Client) ListIncompleteUploads(ctx context.Context, bucketName, objectPrefix string, recursive bool) <-chan ObjectMultipartInfo {
return c.listIncompleteUploads(ctx, bucketName, objectPrefix, recursive)
}
// listIncompleteUploads lists all incomplete uploads.
func (c *Client) listIncompleteUploads(ctx context.Context, bucketName, objectPrefix string, recursive bool) <-chan ObjectMultipartInfo {
// Allocate channel for multipart uploads.
objectMultipartStatCh := make(chan ObjectMultipartInfo, 1)
// Delimiter is set to "/" by default.
delimiter := "/"
if recursive {
// If recursive do not delimit.
delimiter = ""
}
// Validate bucket name.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(objectMultipartStatCh)
objectMultipartStatCh <- ObjectMultipartInfo{
Err: err,
}
return objectMultipartStatCh
}
// Validate incoming object prefix.
if err := s3utils.CheckValidObjectNamePrefix(objectPrefix); err != nil {
defer close(objectMultipartStatCh)
objectMultipartStatCh <- ObjectMultipartInfo{
Err: err,
}
return objectMultipartStatCh
}
go func(objectMultipartStatCh chan<- ObjectMultipartInfo) {
defer close(objectMultipartStatCh)
// object and upload ID marker for future requests.
var objectMarker string
var uploadIDMarker string
for {
// list all multipart uploads.
result, err := c.listMultipartUploadsQuery(ctx, bucketName, objectMarker, uploadIDMarker, objectPrefix, delimiter, 0)
if err != nil {
objectMultipartStatCh <- ObjectMultipartInfo{
Err: err,
}
return
}
objectMarker = result.NextKeyMarker
uploadIDMarker = result.NextUploadIDMarker
// Send all multipart uploads.
for _, obj := range result.Uploads {
select {
// Send individual uploads here.
case objectMultipartStatCh <- obj:
// If the context is canceled
case <-ctx.Done():
return
}
}
// Send all common prefixes if any.
// NOTE: prefixes are only present if the request is delimited.
for _, obj := range result.CommonPrefixes {
select {
// Send delimited prefixes here.
case objectMultipartStatCh <- ObjectMultipartInfo{Key: obj.Prefix, Size: 0}:
// If context is canceled.
case <-ctx.Done():
return
}
}
// Listing ends if the result is not truncated; return right here.
if !result.IsTruncated {
return
}
}
}(objectMultipartStatCh)
// return.
return objectMultipartStatCh
}
// listMultipartUploadsQuery - (List Multipart Uploads).
// - Lists some or all (up to 1000) in-progress multipart uploads in a bucket.
//
// You can use the request parameters as selection criteria to return a subset of the uploads in a bucket.
// request parameters :-
// ---------
// ?key-marker - Specifies the multipart upload after which listing should begin.
// ?upload-id-marker - Together with key-marker specifies the multipart upload after which listing should begin.
// ?delimiter - A delimiter is a character you use to group keys.
// ?prefix - Limits the response to keys that begin with the specified prefix.
// ?max-uploads - Sets the maximum number of multipart uploads returned in the response body.
func (c *Client) listMultipartUploadsQuery(ctx context.Context, bucketName, keyMarker, uploadIDMarker, prefix, delimiter string, maxUploads int) (ListMultipartUploadsResult, error) {
// Get resources properly escaped and lined up before using them in http request.
urlValues := make(url.Values)
// Set uploads.
urlValues.Set("uploads", "")
// Set object key marker.
if keyMarker != "" {
urlValues.Set("key-marker", keyMarker)
}
// Set upload id marker.
if uploadIDMarker != "" {
urlValues.Set("upload-id-marker", uploadIDMarker)
}
// Set object prefix; an empty prefix value is okay.
urlValues.Set("prefix", prefix)
// Set delimiter; an empty delimiter value is okay.
urlValues.Set("delimiter", delimiter)
// Always set encoding-type
urlValues.Set("encoding-type", "url")
// maxUploads should be 1000 or less.
if maxUploads > 0 {
// Set max-uploads.
urlValues.Set("max-uploads", fmt.Sprintf("%d", maxUploads))
}
// Execute GET on bucketName to list multipart uploads.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return ListMultipartUploadsResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListMultipartUploadsResult{}, httpRespToErrorResponse(resp, bucketName, "")
}
}
// Decode response body.
listMultipartUploadsResult := ListMultipartUploadsResult{}
err = xmlDecoder(resp.Body, &listMultipartUploadsResult)
if err != nil {
return listMultipartUploadsResult, err
}
listMultipartUploadsResult.NextKeyMarker, err = decodeS3Name(listMultipartUploadsResult.NextKeyMarker, listMultipartUploadsResult.EncodingType)
if err != nil {
return listMultipartUploadsResult, err
}
listMultipartUploadsResult.NextUploadIDMarker, err = decodeS3Name(listMultipartUploadsResult.NextUploadIDMarker, listMultipartUploadsResult.EncodingType)
if err != nil {
return listMultipartUploadsResult, err
}
for i, obj := range listMultipartUploadsResult.Uploads {
listMultipartUploadsResult.Uploads[i].Key, err = decodeS3Name(obj.Key, listMultipartUploadsResult.EncodingType)
if err != nil {
return listMultipartUploadsResult, err
}
}
for i, obj := range listMultipartUploadsResult.CommonPrefixes {
listMultipartUploadsResult.CommonPrefixes[i].Prefix, err = decodeS3Name(obj.Prefix, listMultipartUploadsResult.EncodingType)
if err != nil {
return listMultipartUploadsResult, err
}
}
return listMultipartUploadsResult, nil
}
// listObjectParts lists all object parts recursively.
func (c *Client) listObjectParts(ctx context.Context, bucketName, objectName, uploadID string) (partsInfo map[int]ObjectPart, err error) {
// Part number marker for the next batch of request.
var nextPartNumberMarker int
partsInfo = make(map[int]ObjectPart)
for {
// Get a list of uploaded parts, at most 1000 per request.
listObjPartsResult, err := c.listObjectPartsQuery(ctx, bucketName, objectName, uploadID, nextPartNumberMarker, 1000)
if err != nil {
return nil, err
}
// Append to parts info.
for _, part := range listObjPartsResult.ObjectParts {
// Trim off the odd double quotes from ETag in the beginning and end.
part.ETag = trimEtag(part.ETag)
partsInfo[part.PartNumber] = part
}
// Keep part number marker, for the next iteration.
nextPartNumberMarker = listObjPartsResult.NextPartNumberMarker
// Listing ends when the result is not truncated; return right here.
if !listObjPartsResult.IsTruncated {
break
}
}
// Return all the parts.
return partsInfo, nil
}
// findUploadIDs lists all incomplete uploads and finds the uploadIDs matching the object name.
func (c *Client) findUploadIDs(ctx context.Context, bucketName, objectName string) ([]string, error) {
var uploadIDs []string
// Make list incomplete uploads recursive.
isRecursive := true
// List all incomplete uploads.
for mpUpload := range c.listIncompleteUploads(ctx, bucketName, objectName, isRecursive) {
if mpUpload.Err != nil {
return nil, mpUpload.Err
}
if objectName == mpUpload.Key {
uploadIDs = append(uploadIDs, mpUpload.UploadID)
}
}
// Return all matching upload ids.
return uploadIDs, nil
}
// listObjectPartsQuery (List Parts query)
// - lists some or all (up to 1000) parts that have been uploaded
// for a specific multipart upload
//
// You can use the request parameters as selection criteria to return
// a subset of the uploads in a bucket, request parameters :-
// ---------
// ?part-number-marker - Specifies the part after which listing should
// begin.
// ?max-parts - Maximum parts to be listed per request.
func (c *Client) listObjectPartsQuery(ctx context.Context, bucketName, objectName, uploadID string, partNumberMarker, maxParts int) (ListObjectPartsResult, error) {
// Get resources properly escaped and lined up before using them in http request.
urlValues := make(url.Values)
// Set part number marker.
urlValues.Set("part-number-marker", fmt.Sprintf("%d", partNumberMarker))
// Set upload id.
urlValues.Set("uploadId", uploadID)
// maxParts should be 1000 or less.
if maxParts > 0 {
// Set max parts.
urlValues.Set("max-parts", fmt.Sprintf("%d", maxParts))
}
// Execute GET on objectName to get list of parts.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return ListObjectPartsResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ListObjectPartsResult{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Decode list object parts XML.
listObjectPartsResult := ListObjectPartsResult{}
err = xmlDecoder(resp.Body, &listObjectPartsResult)
if err != nil {
return listObjectPartsResult, err
}
return listObjectPartsResult, nil
}
// Decode an S3 object name according to the encoding type
func decodeS3Name(name, encodingType string) (string, error) {
switch encodingType {
case "url":
return url.QueryUnescape(name)
default:
return name, nil
}
}

176
vendor/github.com/minio/minio-go/v7/api-object-legal-hold.go generated vendored Normal file
View File

@ -0,0 +1,176 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"fmt"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// objectLegalHold - object legal hold specified in
// https://docs.aws.amazon.com/AmazonS3/latest/API/archive-RESTObjectPUTLegalHold.html
type objectLegalHold struct {
XMLNS string `xml:"xmlns,attr,omitempty"`
XMLName xml.Name `xml:"LegalHold"`
Status LegalHoldStatus `xml:"Status,omitempty"`
}
// PutObjectLegalHoldOptions represents options specified by user for PutObjectLegalHold call
type PutObjectLegalHoldOptions struct {
VersionID string
Status *LegalHoldStatus
}
// GetObjectLegalHoldOptions represents options specified by user for GetObjectLegalHold call
type GetObjectLegalHoldOptions struct {
VersionID string
}
// LegalHoldStatus - object legal hold status.
type LegalHoldStatus string
const (
// LegalHoldEnabled indicates legal hold is enabled
LegalHoldEnabled LegalHoldStatus = "ON"
// LegalHoldDisabled indicates legal hold is disabled
LegalHoldDisabled LegalHoldStatus = "OFF"
)
func (r LegalHoldStatus) String() string {
return string(r)
}
// IsValid - check whether this legal hold status is valid or not.
func (r LegalHoldStatus) IsValid() bool {
return r == LegalHoldEnabled || r == LegalHoldDisabled
}
func newObjectLegalHold(status *LegalHoldStatus) (*objectLegalHold, error) {
if status == nil {
return nil, fmt.Errorf("Status not set")
}
if !status.IsValid() {
return nil, fmt.Errorf("invalid legal hold status `%v`", status)
}
legalHold := &objectLegalHold{
Status: *status,
}
return legalHold, nil
}
// PutObjectLegalHold : sets object legal hold for a given object and versionID.
func (c *Client) PutObjectLegalHold(ctx context.Context, bucketName, objectName string, opts PutObjectLegalHoldOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("legal-hold", "")
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
lh, err := newObjectLegalHold(opts.Status)
if err != nil {
return err
}
lhData, err := xml.Marshal(lh)
if err != nil {
return err
}
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentBody: bytes.NewReader(lhData),
contentLength: int64(len(lhData)),
contentMD5Base64: sumMD5Base64(lhData),
contentSHA256Hex: sum256Hex(lhData),
}
// Execute PUT Object Legal Hold.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, objectName)
}
}
return nil
}
// GetObjectLegalHold gets legal-hold status of given object.
func (c *Client) GetObjectLegalHold(ctx context.Context, bucketName, objectName string, opts GetObjectLegalHoldOptions) (status *LegalHoldStatus, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
urlValues := make(url.Values)
urlValues.Set("legal-hold", "")
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
// Execute GET on the object to fetch its legal hold status.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
lh := &objectLegalHold{}
if err = xml.NewDecoder(resp.Body).Decode(lh); err != nil {
return nil, err
}
return &lh.Status, nil
}
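// Illustrative sketch, not part of the upstream source: enabling a legal hold
// on a specific object version and reading it back. The bucket, object and
// versionID values are hypothetical.
//
// status := minio.LegalHoldEnabled
// err := client.PutObjectLegalHold(ctx, "mybucket", "myobject",
//     minio.PutObjectLegalHoldOptions{Status: &status, VersionID: versionID})
// if err != nil {
//     return err
// }
// got, err := client.GetObjectLegalHold(ctx, "mybucket", "myobject",
//     minio.GetObjectLegalHoldOptions{VersionID: versionID})
// // On success, *got == minio.LegalHoldEnabled.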

241
vendor/github.com/minio/minio-go/v7/api-object-lock.go generated vendored Normal file
View File

@ -0,0 +1,241 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2019 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"fmt"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// RetentionMode - object retention mode.
type RetentionMode string
const (
// Governance - governance mode.
Governance RetentionMode = "GOVERNANCE"
// Compliance - compliance mode.
Compliance RetentionMode = "COMPLIANCE"
)
func (r RetentionMode) String() string {
return string(r)
}
// IsValid - check whether this retention mode is valid or not.
func (r RetentionMode) IsValid() bool {
return r == Governance || r == Compliance
}
// ValidityUnit - retention validity unit.
type ValidityUnit string
const (
// Days - denotes no. of days.
Days ValidityUnit = "DAYS"
// Years - denotes no. of years.
Years ValidityUnit = "YEARS"
)
func (unit ValidityUnit) String() string {
return string(unit)
}
// isValid - check whether this validity unit is valid or not.
func (unit ValidityUnit) isValid() bool {
return unit == Days || unit == Years
}
// Retention - bucket level retention configuration.
type Retention struct {
Mode RetentionMode
Validity time.Duration
}
func (r Retention) String() string {
return fmt.Sprintf("{Mode:%v, Validity:%v}", r.Mode, r.Validity)
}
// IsEmpty - returns whether retention is empty or not.
func (r Retention) IsEmpty() bool {
return r.Mode == "" || r.Validity == 0
}
// objectLockConfig - object lock configuration specified in
// https://docs.aws.amazon.com/AmazonS3/latest/API/Type_API_ObjectLockConfiguration.html
type objectLockConfig struct {
XMLNS string `xml:"xmlns,attr,omitempty"`
XMLName xml.Name `xml:"ObjectLockConfiguration"`
ObjectLockEnabled string `xml:"ObjectLockEnabled"`
Rule *struct {
DefaultRetention struct {
Mode RetentionMode `xml:"Mode"`
Days *uint `xml:"Days"`
Years *uint `xml:"Years"`
} `xml:"DefaultRetention"`
} `xml:"Rule,omitempty"`
}
func newObjectLockConfig(mode *RetentionMode, validity *uint, unit *ValidityUnit) (*objectLockConfig, error) {
config := &objectLockConfig{
ObjectLockEnabled: "Enabled",
}
if mode != nil && validity != nil && unit != nil {
if !mode.IsValid() {
return nil, fmt.Errorf("invalid retention mode `%v`", mode)
}
if !unit.isValid() {
return nil, fmt.Errorf("invalid validity unit `%v`", unit)
}
config.Rule = &struct {
DefaultRetention struct {
Mode RetentionMode `xml:"Mode"`
Days *uint `xml:"Days"`
Years *uint `xml:"Years"`
} `xml:"DefaultRetention"`
}{}
config.Rule.DefaultRetention.Mode = *mode
if *unit == Days {
config.Rule.DefaultRetention.Days = validity
} else {
config.Rule.DefaultRetention.Years = validity
}
return config, nil
}
if mode == nil && validity == nil && unit == nil {
return config, nil
}
return nil, fmt.Errorf("all of retention mode, validity and validity unit must be passed")
}
// SetBucketObjectLockConfig sets object lock configuration in given bucket. mode, validity and unit are either all set or all nil.
func (c *Client) SetBucketObjectLockConfig(ctx context.Context, bucketName string, mode *RetentionMode, validity *uint, unit *ValidityUnit) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("object-lock", "")
config, err := newObjectLockConfig(mode, validity, unit)
if err != nil {
return err
}
configData, err := xml.Marshal(config)
if err != nil {
return err
}
reqMetadata := requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(configData),
contentLength: int64(len(configData)),
contentMD5Base64: sumMD5Base64(configData),
contentSHA256Hex: sum256Hex(configData),
}
// Execute PUT bucket object lock configuration.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
return nil
}
// GetObjectLockConfig gets object lock configuration of given bucket.
func (c *Client) GetObjectLockConfig(ctx context.Context, bucketName string) (objectLock string, mode *RetentionMode, validity *uint, unit *ValidityUnit, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", nil, nil, nil, err
}
urlValues := make(url.Values)
urlValues.Set("object-lock", "")
// Execute GET on the bucket to fetch the object lock configuration.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return "", nil, nil, nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return "", nil, nil, nil, httpRespToErrorResponse(resp, bucketName, "")
}
}
config := &objectLockConfig{}
if err = xml.NewDecoder(resp.Body).Decode(config); err != nil {
return "", nil, nil, nil, err
}
if config.Rule != nil {
mode = &config.Rule.DefaultRetention.Mode
if config.Rule.DefaultRetention.Days != nil {
validity = config.Rule.DefaultRetention.Days
days := Days
unit = &days
} else {
validity = config.Rule.DefaultRetention.Years
years := Years
unit = &years
}
return config.ObjectLockEnabled, mode, validity, unit, nil
}
return config.ObjectLockEnabled, nil, nil, nil, nil
}
// GetBucketObjectLockConfig gets object lock configuration of given bucket.
func (c *Client) GetBucketObjectLockConfig(ctx context.Context, bucketName string) (mode *RetentionMode, validity *uint, unit *ValidityUnit, err error) {
_, mode, validity, unit, err = c.GetObjectLockConfig(ctx, bucketName)
return mode, validity, unit, err
}
// SetObjectLockConfig sets object lock configuration in given bucket. mode, validity and unit are either all set or all nil.
func (c *Client) SetObjectLockConfig(ctx context.Context, bucketName string, mode *RetentionMode, validity *uint, unit *ValidityUnit) error {
return c.SetBucketObjectLockConfig(ctx, bucketName, mode, validity, unit)
}
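// Illustrative sketch, not part of the upstream source: setting a 30-day
// GOVERNANCE default retention on a bucket. As newObjectLockConfig enforces,
// mode, validity and unit must all be set together. The bucket is hypothetical.
//
// mode := minio.Governance
// validity := uint(30)
// unit := minio.Days
// if err := client.SetBucketObjectLockConfig(ctx, "mybucket", &mode, &validity, &unit); err != nil {
//     return err
// }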

165
vendor/github.com/minio/minio-go/v7/api-object-retention.go generated vendored Normal file
View File

@ -0,0 +1,165 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2019-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"fmt"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// objectRetention - object retention specified in
// https://docs.aws.amazon.com/AmazonS3/latest/API/Type_API_ObjectLockConfiguration.html
type objectRetention struct {
XMLNS string `xml:"xmlns,attr,omitempty"`
XMLName xml.Name `xml:"Retention"`
Mode RetentionMode `xml:"Mode,omitempty"`
RetainUntilDate *time.Time `type:"timestamp" timestampFormat:"iso8601" xml:"RetainUntilDate,omitempty"`
}
func newObjectRetention(mode *RetentionMode, date *time.Time) (*objectRetention, error) {
objectRetention := &objectRetention{}
if date != nil && !date.IsZero() {
objectRetention.RetainUntilDate = date
}
if mode != nil {
if !mode.IsValid() {
return nil, fmt.Errorf("invalid retention mode `%v`", mode)
}
objectRetention.Mode = *mode
}
return objectRetention, nil
}
// PutObjectRetentionOptions represents options specified by user for PutObject call
type PutObjectRetentionOptions struct {
GovernanceBypass bool
Mode *RetentionMode
RetainUntilDate *time.Time
VersionID string
}
// PutObjectRetention sets object retention for a given object and versionID.
func (c *Client) PutObjectRetention(ctx context.Context, bucketName, objectName string, opts PutObjectRetentionOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("retention", "")
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
retention, err := newObjectRetention(opts.Mode, opts.RetainUntilDate)
if err != nil {
return err
}
retentionData, err := xml.Marshal(retention)
if err != nil {
return err
}
// Build headers.
headers := make(http.Header)
if opts.GovernanceBypass {
// Set the bypass governance retention header
headers.Set(amzBypassGovernance, "true")
}
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentBody: bytes.NewReader(retentionData),
contentLength: int64(len(retentionData)),
contentMD5Base64: sumMD5Base64(retentionData),
contentSHA256Hex: sum256Hex(retentionData),
customHeader: headers,
}
// Execute PUT Object Retention.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, objectName)
}
}
return nil
}
// GetObjectRetention gets retention of given object.
func (c *Client) GetObjectRetention(ctx context.Context, bucketName, objectName, versionID string) (mode *RetentionMode, retainUntilDate *time.Time, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, nil, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return nil, nil, err
}
urlValues := make(url.Values)
urlValues.Set("retention", "")
if versionID != "" {
urlValues.Set("versionId", versionID)
}
// Execute GET on the object to fetch its retention configuration.
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return nil, nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return nil, nil, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
retention := &objectRetention{}
if err = xml.NewDecoder(resp.Body).Decode(retention); err != nil {
return nil, nil, err
}
return &retention.Mode, retention.RetainUntilDate, nil
}
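// Illustrative sketch, not part of the upstream source: placing a COMPLIANCE
// retention on an object for one year. The bucket and object names are
// hypothetical; note that COMPLIANCE retention cannot be shortened once set.
//
// mode := minio.Compliance
// until := time.Now().AddDate(1, 0, 0)
// err := client.PutObjectRetention(ctx, "mybucket", "myobject",
//     minio.PutObjectRetentionOptions{Mode: &mode, RetainUntilDate: &until})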

157
vendor/github.com/minio/minio-go/v7/api-object-tagging.go generated vendored Normal file
View File

@ -0,0 +1,157 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/tags"
)
// PutObjectTaggingOptions holds an object version id
// to update tag(s) of a specific object version
type PutObjectTaggingOptions struct {
VersionID string
}
// PutObjectTagging replaces or creates object tag(s) and can target
// a specific object version in a versioned bucket.
func (c *Client) PutObjectTagging(ctx context.Context, bucketName, objectName string, otags *tags.Tags, opts PutObjectTaggingOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("tagging", "")
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
reqBytes, err := xml.Marshal(otags)
if err != nil {
return err
}
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentBody: bytes.NewReader(reqBytes),
contentLength: int64(len(reqBytes)),
contentMD5Base64: sumMD5Base64(reqBytes),
}
// Execute PUT to set object tagging.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, objectName)
}
}
return nil
}
// GetObjectTaggingOptions holds the object version ID
// to fetch the tagging key/value pairs
type GetObjectTaggingOptions struct {
VersionID string
}
// GetObjectTagging fetches object tag(s) with options to target
// a specific object version in a versioned bucket.
func (c *Client) GetObjectTagging(ctx context.Context, bucketName, objectName string, opts GetObjectTaggingOptions) (*tags.Tags, error) {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("tagging", "")
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
// Execute GET on object to get object tag(s)
resp, err := c.executeMethod(ctx, http.MethodGet, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return nil, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
return tags.ParseObjectXML(resp.Body)
}
// RemoveObjectTaggingOptions holds the version id of the object to remove
type RemoveObjectTaggingOptions struct {
VersionID string
}
// RemoveObjectTagging removes object tag(s) with options to control a specific object
// version in a versioned bucket
func (c *Client) RemoveObjectTagging(ctx context.Context, bucketName, objectName string, opts RemoveObjectTaggingOptions) error {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
urlValues.Set("tagging", "")
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
// Execute DELETE on object to remove object tag(s)
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
// S3 returns "204 No Content" after object tag deletion.
if resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, objectName)
}
}
return err
}
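// Illustrative sketch, not part of the upstream source: a tag round trip
// built on the pkg/tags helper. The bucket, object and tag values are
// hypothetical.
//
// otags, err := tags.NewTags(map[string]string{"team": "infra", "env": "prod"}, true)
// if err != nil {
//     return err
// }
// if err := client.PutObjectTagging(ctx, "mybucket", "myobject", otags,
//     minio.PutObjectTaggingOptions{}); err != nil {
//     return err
// }
// got, err := client.GetObjectTagging(ctx, "mybucket", "myobject", minio.GetObjectTaggingOptions{})
// // got.ToMap() == map[env:prod team:infra]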

228
vendor/github.com/minio/minio-go/v7/api-presigned.go generated vendored Normal file
View File

@ -0,0 +1,228 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"errors"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/signer"
)
// presignURL - Returns a presigned URL for an input 'method'.
// The maximum expiry is 7 days (604800 seconds) and the minimum is 1 second.
func (c *Client) presignURL(ctx context.Context, method string, bucketName string, objectName string, expires time.Duration, reqParams url.Values, extraHeaders http.Header) (u *url.URL, err error) {
// Input validation.
if method == "" {
return nil, errInvalidArgument("method cannot be empty.")
}
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
if err = isValidExpiry(expires); err != nil {
return nil, err
}
// Convert expires into seconds.
expireSeconds := int64(expires / time.Second)
reqMetadata := requestMetadata{
presignURL: true,
bucketName: bucketName,
objectName: objectName,
expires: expireSeconds,
queryValues: reqParams,
extraPresignHeader: extraHeaders,
}
// Instantiate a new request.
// Since expires is set newRequest will presign the request.
var req *http.Request
if req, err = c.newRequest(ctx, method, reqMetadata); err != nil {
return nil, err
}
return req.URL, nil
}
// PresignedGetObject - Returns a presigned URL to access object
// data without credentials. The URL can have a maximum expiry of
// up to 7 days and a minimum of 1 second. Additionally you can override
// a set of response headers using the query parameters.
func (c *Client) PresignedGetObject(ctx context.Context, bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
return c.presignURL(ctx, http.MethodGet, bucketName, objectName, expires, reqParams, nil)
}
// PresignedHeadObject - Returns a presigned URL to access
// object metadata without credentials. The URL can have a maximum expiry
// of up to 7 days and a minimum of 1 second. Additionally you can override
// a set of response headers using the query parameters.
func (c *Client) PresignedHeadObject(ctx context.Context, bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
return c.presignURL(ctx, http.MethodHead, bucketName, objectName, expires, reqParams, nil)
}
// PresignedPutObject - Returns a presigned URL to upload an object
// without credentials. The URL can have a maximum expiry of up to
// 7 days and a minimum of 1 second.
func (c *Client) PresignedPutObject(ctx context.Context, bucketName string, objectName string, expires time.Duration) (u *url.URL, err error) {
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
return c.presignURL(ctx, http.MethodPut, bucketName, objectName, expires, nil, nil)
}
// PresignHeader - similar to Presign() but allows including HTTP headers that
// will be used to build the signature. The request using the resulting URL will
// need to have the exact same headers to be added for signature validation to
// pass.
//
// FIXME: The extra header parameter should be included in Presign() in the next
// major version bump, and this function should then be deprecated.
func (c *Client) PresignHeader(ctx context.Context, method string, bucketName string, objectName string, expires time.Duration, reqParams url.Values, extraHeaders http.Header) (u *url.URL, err error) {
return c.presignURL(ctx, method, bucketName, objectName, expires, reqParams, extraHeaders)
}
// Presign - returns a presigned URL for any http method of your choice,
// along with custom request params and extra signed headers. The URL can
// have a maximum expiry of up to 7 days and a minimum of 1 second.
func (c *Client) Presign(ctx context.Context, method string, bucketName string, objectName string, expires time.Duration, reqParams url.Values) (u *url.URL, err error) {
return c.presignURL(ctx, method, bucketName, objectName, expires, reqParams, nil)
}
// PresignedPostPolicy - Returns POST urlString, form data to upload an object.
func (c *Client) PresignedPostPolicy(ctx context.Context, p *PostPolicy) (u *url.URL, formData map[string]string, err error) {
// Validate input arguments.
if p.expiration.IsZero() {
return nil, nil, errors.New("Expiration time must be specified")
}
if _, ok := p.formData["key"]; !ok {
return nil, nil, errors.New("object key must be specified")
}
if _, ok := p.formData["bucket"]; !ok {
return nil, nil, errors.New("bucket name must be specified")
}
bucketName := p.formData["bucket"]
// Fetch the bucket location.
location, err := c.getBucketLocation(ctx, bucketName)
if err != nil {
return nil, nil, err
}
isVirtualHost := c.isVirtualHostStyleRequest(*c.endpointURL, bucketName)
u, err = c.makeTargetURL(bucketName, "", location, isVirtualHost, nil)
if err != nil {
return nil, nil, err
}
// Get credentials from the configured credentials provider.
credValues, err := c.credsProvider.Get()
if err != nil {
return nil, nil, err
}
var (
signerType = credValues.SignerType
sessionToken = credValues.SessionToken
accessKeyID = credValues.AccessKeyID
secretAccessKey = credValues.SecretAccessKey
)
if signerType.IsAnonymous() {
return nil, nil, errInvalidArgument("Presigned operations are not supported for anonymous credentials")
}
// Keep time.
t := time.Now().UTC()
// Handle signature version '2' here.
if signerType.IsV2() {
policyBase64 := p.base64()
p.formData["policy"] = policyBase64
// For Google endpoint set this value to be 'GoogleAccessId'.
if s3utils.IsGoogleEndpoint(*c.endpointURL) {
p.formData["GoogleAccessId"] = accessKeyID
} else {
// For all other endpoints set this value to be 'AWSAccessKeyId'.
p.formData["AWSAccessKeyId"] = accessKeyID
}
// Sign the policy.
p.formData["signature"] = signer.PostPresignSignatureV2(policyBase64, secretAccessKey)
return u, p.formData, nil
}
// Add date policy.
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-date",
value: t.Format(iso8601DateFormat),
}); err != nil {
return nil, nil, err
}
// Add algorithm policy.
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-algorithm",
value: signV4Algorithm,
}); err != nil {
return nil, nil, err
}
// Add a credential policy.
credential := signer.GetCredential(accessKeyID, location, t, signer.ServiceTypeS3)
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-credential",
value: credential,
}); err != nil {
return nil, nil, err
}
if sessionToken != "" {
if err = p.addNewPolicy(policyCondition{
matchType: "eq",
condition: "$x-amz-security-token",
value: sessionToken,
}); err != nil {
return nil, nil, err
}
}
// Get base64 encoded policy.
policyBase64 := p.base64()
// Fill in the form data.
p.formData["policy"] = policyBase64
p.formData["x-amz-algorithm"] = signV4Algorithm
p.formData["x-amz-credential"] = credential
p.formData["x-amz-date"] = t.Format(iso8601DateFormat)
if sessionToken != "" {
p.formData["x-amz-security-token"] = sessionToken
}
p.formData["x-amz-signature"] = signer.PostPresignSignatureV4(policyBase64, t, secretAccessKey, location)
return u, p.formData, nil
}
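A minimal sketch of generating a presigned GET URL with overridden response headers, as the comments above describe; the endpoint, credentials, and names are hypothetical:

package main

import (
	"context"
	"fmt"
	"log"
	"net/url"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{ // hypothetical endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Override response headers via query parameters.
	reqParams := make(url.Values)
	reqParams.Set("response-content-disposition", `attachment; filename="report.pdf"`)
	// Expiry must be between 1 second and 7 days.
	u, err := client.PresignedGetObject(context.Background(),
		"my-bucket", "my-object", 24*time.Hour, reqParams)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(u.String())
}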

123
vendor/github.com/minio/minio-go/v7/api-put-bucket.go generated vendored Normal file
View File

@ -0,0 +1,123 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"net/http"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// Bucket operations
func (c *Client) makeBucket(ctx context.Context, bucketName string, opts MakeBucketOptions) (err error) {
// Validate the input arguments.
if err := s3utils.CheckValidBucketNameStrict(bucketName); err != nil {
return err
}
err = c.doMakeBucket(ctx, bucketName, opts.Region, opts.ObjectLocking)
if err != nil && (opts.Region == "" || opts.Region == "us-east-1") {
if resp, ok := err.(ErrorResponse); ok && resp.Code == "AuthorizationHeaderMalformed" && resp.Region != "" {
err = c.doMakeBucket(ctx, bucketName, resp.Region, opts.ObjectLocking)
}
}
return err
}
func (c *Client) doMakeBucket(ctx context.Context, bucketName string, location string, objectLockEnabled bool) (err error) {
defer func() {
// Save the location into cache on a successful makeBucket response.
if err == nil {
c.bucketLocCache.Set(bucketName, location)
}
}()
// If location is empty, treat it as the default region 'us-east-1'.
if location == "" {
location = "us-east-1"
// For custom region clients, default
// to the custom region instead of 'us-east-1'.
if c.region != "" {
location = c.region
}
}
// PUT bucket request metadata.
reqMetadata := requestMetadata{
bucketName: bucketName,
bucketLocation: location,
}
if objectLockEnabled {
headers := make(http.Header)
headers.Add("x-amz-bucket-object-lock-enabled", "true")
reqMetadata.customHeader = headers
}
// If location is not 'us-east-1' create bucket location config.
if location != "us-east-1" && location != "" {
createBucketConfig := createBucketConfiguration{}
createBucketConfig.Location = location
var createBucketConfigBytes []byte
createBucketConfigBytes, err = xml.Marshal(createBucketConfig)
if err != nil {
return err
}
reqMetadata.contentMD5Base64 = sumMD5Base64(createBucketConfigBytes)
reqMetadata.contentSHA256Hex = sum256Hex(createBucketConfigBytes)
reqMetadata.contentBody = bytes.NewReader(createBucketConfigBytes)
reqMetadata.contentLength = int64(len(createBucketConfigBytes))
}
// Execute PUT to create a new bucket.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
// Success.
return nil
}
// MakeBucketOptions holds all options to tweak bucket creation
type MakeBucketOptions struct {
// Bucket location
Region string
// Enable object locking
ObjectLocking bool
}
// MakeBucket creates a new bucket with bucketName with a context to control cancellations and timeouts.
//
// Location is an optional argument, by default all buckets are
// created in US Standard Region.
//
// For Amazon S3 for more supported regions - http://docs.aws.amazon.com/general/latest/gr/rande.html
// For Google Cloud Storage for more supported regions - https://cloud.google.com/storage/docs/bucket-locations
func (c *Client) MakeBucket(ctx context.Context, bucketName string, opts MakeBucketOptions) (err error) {
return c.makeBucket(ctx, bucketName, opts)
}
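A minimal sketch of creating a bucket with an explicit region and object locking through the options above; the endpoint, credentials, and bucket name are hypothetical:

package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{ // hypothetical endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Create the bucket in a specific region with object locking enabled;
	// locking can only be turned on at bucket creation time.
	err = client.MakeBucket(context.Background(), "my-bucket", minio.MakeBucketOptions{
		Region:        "us-east-1",
		ObjectLocking: true,
	})
	if err != nil {
		log.Fatal(err)
	}
}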

View File

@ -0,0 +1,150 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"io"
"math"
"os"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
const nullVersionID = "null"
// Verify if reader is *minio.Object
func isObject(reader io.Reader) (ok bool) {
_, ok = reader.(*Object)
return
}
// Verify if reader is a generic ReaderAt
func isReadAt(reader io.Reader) (ok bool) {
var v *os.File
v, ok = reader.(*os.File)
if ok {
// Stdin, Stdout and Stderr all have *os.File type
// which happen to also be io.ReaderAt compatible
// we need to add special conditions for them to
// be ignored by this function.
for _, f := range []string{
"/dev/stdin",
"/dev/stdout",
"/dev/stderr",
} {
if f == v.Name() {
ok = false
break
}
}
} else {
_, ok = reader.(io.ReaderAt)
}
return
}
// OptimalPartInfo - calculate the optimal part info for a given
// object size.
//
// NOTE: Assumption here is that for any object to be uploaded to any S3 compatible
// object storage it will have the following parameters as constants.
//
// maxPartsCount - 10000
// minPartSize - 16MiB
// maxMultipartPutObjectSize - 5TiB
//
func OptimalPartInfo(objectSize int64, configuredPartSize uint64) (totalPartsCount int, partSize int64, lastPartSize int64, err error) {
// If object size is '-1', set it to the 5TiB maximum.
var unknownSize bool
if objectSize == -1 {
unknownSize = true
objectSize = maxMultipartPutObjectSize
}
// object size is larger than supported maximum.
if objectSize > maxMultipartPutObjectSize {
err = errEntityTooLarge(objectSize, maxMultipartPutObjectSize, "", "")
return
}
var partSizeFlt float64
if configuredPartSize > 0 {
if int64(configuredPartSize) > objectSize {
err = errEntityTooLarge(int64(configuredPartSize), objectSize, "", "")
return
}
if !unknownSize {
if objectSize > (int64(configuredPartSize) * maxPartsCount) {
err = errInvalidArgument("Part size * max_parts(10000) is lesser than input objectSize.")
return
}
}
if configuredPartSize < absMinPartSize {
err = errInvalidArgument("Input part size is smaller than allowed minimum of 5MiB.")
return
}
if configuredPartSize > maxPartSize {
err = errInvalidArgument("Input part size is bigger than allowed maximum of 5GiB.")
return
}
partSizeFlt = float64(configuredPartSize)
if unknownSize {
// If the input has unknown size and a part size is configured,
// keep the object size at the maximum allowed as per 10000 parts.
objectSize = int64(configuredPartSize) * maxPartsCount
}
} else {
configuredPartSize = minPartSize
// Use floats for part size for all calculations to avoid
// overflows during float64 to int64 conversions.
partSizeFlt = float64(objectSize / maxPartsCount)
partSizeFlt = math.Ceil(partSizeFlt/float64(configuredPartSize)) * float64(configuredPartSize)
}
// Total parts count.
totalPartsCount = int(math.Ceil(float64(objectSize) / partSizeFlt))
// Part size.
partSize = int64(partSizeFlt)
// Last part size.
lastPartSize = objectSize - int64(totalPartsCount-1)*partSize
return totalPartsCount, partSize, lastPartSize, nil
}
// getUploadID - fetch upload id if already present for an object name
// or initiate a new request to fetch a new upload id.
func (c *Client) newUploadID(ctx context.Context, bucketName, objectName string, opts PutObjectOptions) (uploadID string, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return "", err
}
// Initiate multipart upload for an object.
initMultipartUploadResult, err := c.initiateMultipartUpload(ctx, bucketName, objectName, opts)
if err != nil {
return "", err
}
return initMultipartUploadResult.UploadID, nil
}
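Since OptimalPartInfo is exported, the sizing logic above can be exercised directly; a small sketch, assuming only the constants stated in the NOTE above (16MiB minimum part size, 10000 parts, 5TiB maximum):

package main

import (
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
)

func main() {
	// Let the library pick part sizing for a 1GiB object;
	// configuredPartSize == 0 means "use the defaults".
	totalParts, partSize, lastPartSize, err := minio.OptimalPartInfo(1<<30, 0)
	if err != nil {
		log.Fatal(err)
	}
	// With the documented 16MiB minimum this comes out to 64 parts of 16MiB.
	fmt.Println(totalParts, partSize, lastPartSize)
}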

View File

@ -0,0 +1,64 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"mime"
"os"
"path/filepath"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// FPutObject - Create an object in a bucket, with contents from file at filePath. Allows request cancellation.
func (c *Client) FPutObject(ctx context.Context, bucketName, objectName, filePath string, opts PutObjectOptions) (info UploadInfo, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Open the referenced file.
fileReader, err := os.Open(filePath)
// If any error fail quickly here.
if err != nil {
return UploadInfo{}, err
}
defer fileReader.Close()
// Save the file stat.
fileStat, err := fileReader.Stat()
if err != nil {
return UploadInfo{}, err
}
// Save the file size.
fileSize := fileStat.Size()
// Set contentType based on the filepath extension if not given, or fall
// back to "application/octet-stream" if the extension has no associated type.
if opts.ContentType == "" {
if opts.ContentType = mime.TypeByExtension(filepath.Ext(filePath)); opts.ContentType == "" {
opts.ContentType = "application/octet-stream"
}
}
return c.PutObject(ctx, bucketName, objectName, fileReader, fileSize, opts)
}
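A minimal sketch of the file-upload helper above, relying on its content-type inference; the endpoint, credentials, and paths are hypothetical:

package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{ // hypothetical endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	// ContentType is left empty, so it is derived from the ".csv" extension;
	// unknown extensions fall back to application/octet-stream.
	info, err := client.FPutObject(context.Background(),
		"my-bucket", "reports/2022.csv", "/tmp/2022.csv", minio.PutObjectOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded %d bytes, etag %s", info.Size, info.ETag)
}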

View File

@ -0,0 +1,398 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"sort"
"strconv"
"strings"
"github.com/google/uuid"
"github.com/minio/minio-go/v7/pkg/encrypt"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
func (c *Client) putObjectMultipart(ctx context.Context, bucketName, objectName string, reader io.Reader, size int64,
opts PutObjectOptions,
) (info UploadInfo, err error) {
info, err = c.putObjectMultipartNoStream(ctx, bucketName, objectName, reader, opts)
if err != nil {
errResp := ToErrorResponse(err)
// Verify if multipart functionality is not available; if not,
// fall back to a single PutObject operation.
if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
// Verify if size of reader is greater than '5GiB'.
if size > maxSinglePutObjectSize {
return UploadInfo{}, errEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
}
// Fall back to uploading as single PutObject operation.
return c.putObject(ctx, bucketName, objectName, reader, size, opts)
}
}
return info, err
}
func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, objectName string, reader io.Reader, opts PutObjectOptions) (info UploadInfo, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Total data read and written to server; should be equal to
// 'size' at the end of the call.
var totalUploadedSize int64
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, _, err := OptimalPartInfo(-1, opts.PartSize)
if err != nil {
return UploadInfo{}, err
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return UploadInfo{}, err
}
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Part number always starts with '1'.
partNumber := 1
// Initialize parts uploaded map.
partsInfo := make(map[int]ObjectPart)
// Create a buffer.
buf := make([]byte, partSize)
for partNumber <= totalPartsCount {
// Choose hash algorithms to be calculated by hashCopyN,
// avoid sha256 with non-v4 signature request or
// HTTPS connection.
hashAlgos, hashSums := c.hashMaterials(opts.SendContentMd5)
length, rErr := readFull(reader, buf)
if rErr == io.EOF && partNumber > 1 {
break
}
if rErr != nil && rErr != io.ErrUnexpectedEOF && rErr != io.EOF {
return UploadInfo{}, rErr
}
// Calculates hash sums while copying partSize bytes into cw.
for k, v := range hashAlgos {
v.Write(buf[:length])
hashSums[k] = v.Sum(nil)
v.Close()
}
// Update progress reader appropriately to the latest offset
// as we read from the source.
rd := newHook(bytes.NewReader(buf[:length]), opts.Progress)
// Checksums..
var (
md5Base64 string
sha256Hex string
)
if hashSums["md5"] != nil {
md5Base64 = base64.StdEncoding.EncodeToString(hashSums["md5"])
}
if hashSums["sha256"] != nil {
sha256Hex = hex.EncodeToString(hashSums["sha256"])
}
// Proceed to upload the part.
objPart, uerr := c.uploadPart(ctx, bucketName, objectName, uploadID, rd, partNumber,
md5Base64, sha256Hex, int64(length), opts.ServerSideEncryption)
if uerr != nil {
return UploadInfo{}, uerr
}
// Save successfully uploaded part metadata.
partsInfo[partNumber] = objPart
// Save successfully uploaded size.
totalUploadedSize += int64(length)
// Increment part number.
partNumber++
// For unknown size, we break away on read EOF;
// we do not have to upload till totalPartsCount.
if rErr == io.EOF {
break
}
}
// Loop over total uploaded parts to save them in
// Parts array before completing the multipart request.
for i := 1; i < partNumber; i++ {
part, ok := partsInfo[i]
if !ok {
return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Missing part number %d", i))
}
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: part.ETag,
PartNumber: part.PartNumber,
})
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, PutObjectOptions{})
if err != nil {
return UploadInfo{}, err
}
uploadInfo.Size = totalUploadedSize
return uploadInfo, nil
}
// initiateMultipartUpload - Initiates a multipart upload and returns an upload ID.
func (c *Client) initiateMultipartUpload(ctx context.Context, bucketName, objectName string, opts PutObjectOptions) (initiateMultipartUploadResult, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return initiateMultipartUploadResult{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return initiateMultipartUploadResult{}, err
}
// Initialize url queries.
urlValues := make(url.Values)
urlValues.Set("uploads", "")
if opts.Internal.SourceVersionID != "" {
if opts.Internal.SourceVersionID != nullVersionID {
if _, err := uuid.Parse(opts.Internal.SourceVersionID); err != nil {
return initiateMultipartUploadResult{}, errInvalidArgument(err.Error())
}
}
urlValues.Set("versionId", opts.Internal.SourceVersionID)
}
// Set ContentType header.
customHeader := opts.Header()
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
customHeader: customHeader,
}
// Execute POST on an objectName to initiate multipart upload.
resp, err := c.executeMethod(ctx, http.MethodPost, reqMetadata)
defer closeResponse(resp)
if err != nil {
return initiateMultipartUploadResult{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return initiateMultipartUploadResult{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Decode xml for new multipart upload.
initiateMultipartUploadResult := initiateMultipartUploadResult{}
err = xmlDecoder(resp.Body, &initiateMultipartUploadResult)
if err != nil {
return initiateMultipartUploadResult, err
}
return initiateMultipartUploadResult, nil
}
// uploadPart - Uploads a part in a multipart upload.
func (c *Client) uploadPart(ctx context.Context, bucketName, objectName, uploadID string, reader io.Reader,
partNumber int, md5Base64, sha256Hex string, size int64, sse encrypt.ServerSide,
) (ObjectPart, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ObjectPart{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return ObjectPart{}, err
}
if size > maxPartSize {
return ObjectPart{}, errEntityTooLarge(size, maxPartSize, bucketName, objectName)
}
if size <= -1 {
return ObjectPart{}, errEntityTooSmall(size, bucketName, objectName)
}
if partNumber <= 0 {
return ObjectPart{}, errInvalidArgument("Part number cannot be negative or equal to zero.")
}
if uploadID == "" {
return ObjectPart{}, errInvalidArgument("UploadID cannot be empty.")
}
// Get resources properly escaped and lined up before using them in http request.
urlValues := make(url.Values)
// Set part number.
urlValues.Set("partNumber", strconv.Itoa(partNumber))
// Set upload id.
urlValues.Set("uploadId", uploadID)
// Set encryption headers, if any.
customHeader := make(http.Header)
// https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html
// Server-side encryption is supported by the S3 Multipart Upload actions.
// Unless you are using a customer-provided encryption key, you don't need
// to specify the encryption parameters in each UploadPart request.
if sse != nil && sse.Type() == encrypt.SSEC {
sse.Marshal(customHeader)
}
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
customHeader: customHeader,
contentBody: reader,
contentLength: size,
contentMD5Base64: md5Base64,
contentSHA256Hex: sha256Hex,
}
// Execute PUT on each part.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return ObjectPart{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return ObjectPart{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Once successfully uploaded, return completed part.
objPart := ObjectPart{}
objPart.Size = size
objPart.PartNumber = partNumber
// Trim off the odd double quotes from ETag in the beginning and end.
objPart.ETag = trimEtag(resp.Header.Get("ETag"))
return objPart, nil
}
// completeMultipartUpload - Completes a multipart upload by assembling previously uploaded parts.
func (c *Client) completeMultipartUpload(ctx context.Context, bucketName, objectName, uploadID string,
complete completeMultipartUpload, opts PutObjectOptions,
) (UploadInfo, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Initialize url queries.
urlValues := make(url.Values)
urlValues.Set("uploadId", uploadID)
// Marshal complete multipart body.
completeMultipartUploadBytes, err := xml.Marshal(complete)
if err != nil {
return UploadInfo{}, err
}
// Instantiate all the complete multipart buffer.
completeMultipartUploadBuffer := bytes.NewReader(completeMultipartUploadBytes)
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentBody: completeMultipartUploadBuffer,
contentLength: int64(len(completeMultipartUploadBytes)),
contentSHA256Hex: sum256Hex(completeMultipartUploadBytes),
customHeader: opts.Header(),
}
// Execute POST to complete multipart upload for an objectName.
resp, err := c.executeMethod(ctx, http.MethodPost, reqMetadata)
defer closeResponse(resp)
if err != nil {
return UploadInfo{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return UploadInfo{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// Read resp.Body into a []byte to parse for an Error response inside the body
var b []byte
b, err = ioutil.ReadAll(resp.Body)
if err != nil {
return UploadInfo{}, err
}
// Decode completed multipart upload response on success.
completeMultipartUploadResult := completeMultipartUploadResult{}
err = xmlDecoder(bytes.NewReader(b), &completeMultipartUploadResult)
if err != nil {
// xml parsing failure due to the presence of an ill-formed xml fragment
return UploadInfo{}, err
} else if completeMultipartUploadResult.Bucket == "" {
// xml's Decode method ignores well-formed xml that doesn't apply to the type of value supplied.
// In this case, it would leave completeMultipartUploadResult with the corresponding zero-values
// of the members.
// Decode completed multipart upload response on failure
completeMultipartUploadErr := ErrorResponse{}
err = xmlDecoder(bytes.NewReader(b), &completeMultipartUploadErr)
if err != nil {
// xml parsing failure due to the presence of an ill-formed xml fragment
return UploadInfo{}, err
}
return UploadInfo{}, completeMultipartUploadErr
}
// extract lifecycle expiry date and rule ID
expTime, ruleID := amzExpirationToExpiryDateRuleID(resp.Header.Get(amzExpiration))
return UploadInfo{
Bucket: completeMultipartUploadResult.Bucket,
Key: completeMultipartUploadResult.Key,
ETag: trimEtag(completeMultipartUploadResult.ETag),
VersionID: resp.Header.Get(amzVersionID),
Location: completeMultipartUploadResult.Location,
Expiration: expTime,
ExpirationRuleID: ruleID,
}, nil
}
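A minimal sketch that exercises the multipart path above by forcing a part size and per-part Content-MD5; the endpoint, credentials, and the input file are hypothetical:

package main

import (
	"context"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{ // hypothetical endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Open("/tmp/backup.tar") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	st, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}
	// Force 64MiB parts and compute Content-MD5 for each part, as
	// handled by the multipart code above.
	_, err = client.PutObject(context.Background(), "my-bucket", "backup.tar",
		f, st.Size(), minio.PutObjectOptions{
			PartSize:       64 << 20,
			SendContentMd5: true,
		})
	if err != nil {
		log.Fatal(err)
	}
}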

View File

@ -0,0 +1,491 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/base64"
"fmt"
"io"
"net/http"
"net/url"
"sort"
"strings"
"github.com/google/uuid"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// putObjectMultipartStream - upload a large object using
// multipart upload and streaming signature for signing payload.
// Comprehensive put object operation involving multipart uploads.
//
// The following code handles these types of readers:
//
// - *minio.Object
// - Any reader which has a method 'ReadAt()'
//
func (c *Client) putObjectMultipartStream(ctx context.Context, bucketName, objectName string,
reader io.Reader, size int64, opts PutObjectOptions,
) (info UploadInfo, err error) {
if !isObject(reader) && isReadAt(reader) && !opts.SendContentMd5 {
// If the reader implements ReadAt and is not a *minio.Object, use the parallel uploader.
info, err = c.putObjectMultipartStreamFromReadAt(ctx, bucketName, objectName, reader.(io.ReaderAt), size, opts)
} else {
info, err = c.putObjectMultipartStreamOptionalChecksum(ctx, bucketName, objectName, reader, size, opts)
}
if err != nil {
errResp := ToErrorResponse(err)
// Verify if multipart functionality is not available; if not,
// fall back to a single PutObject operation.
if errResp.Code == "AccessDenied" && strings.Contains(errResp.Message, "Access Denied") {
// Verify if size of reader is greater than '5GiB'.
if size > maxSinglePutObjectSize {
return UploadInfo{}, errEntityTooLarge(size, maxSinglePutObjectSize, bucketName, objectName)
}
// Fall back to uploading as single PutObject operation.
return c.putObject(ctx, bucketName, objectName, reader, size, opts)
}
}
return info, err
}
// uploadedPartRes - the response received from a part upload.
type uploadedPartRes struct {
Error error // Any error encountered while uploading the part.
PartNum int // Number of the part uploaded.
Size int64 // Size of the part uploaded.
Part ObjectPart
}
type uploadPartReq struct {
PartNum int // Number of the part uploaded.
Part ObjectPart // Size of the part uploaded.
}
// putObjectMultipartStreamFromReadAt - Uploads files bigger than 128MiB.
// Supports all readers which implement the io.ReaderAt interface
// (ReadAt method).
//
// NOTE: This function is meant to be used for all readers which
// implement io.ReaderAt, which allows resuming multipart uploads by
// reading at an offset and avoids re-reading data which was already
// uploaded. Internally this function uses temporary files for staging
// all the data; these temporary files are cleaned automatically when
// the caller (i.e. the http client) closes the stream after uploading
// all the contents successfully.
func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketName, objectName string,
reader io.ReaderAt, size int64, opts PutObjectOptions,
) (info UploadInfo, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, lastPartSize, err := OptimalPartInfo(size, opts.PartSize)
if err != nil {
return UploadInfo{}, err
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return UploadInfo{}, err
}
// Abort the multipart upload in progress if the function returns
// any error; since we do not resume, we purge the parts which have
// been uploaded to relinquish storage space.
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Total data read and written to server; should be equal to 'size' at the end of the call.
var totalUploadedSize int64
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Declare a channel that sends the next part number to be uploaded.
// Buffered to 10000 because that's the maximum number of parts allowed
// by S3.
uploadPartsCh := make(chan uploadPartReq, 10000)
// Declare a channel that sends back the response of a part upload.
// Buffered to 10000 because that's the maximum number of parts allowed
// by S3.
uploadedPartsCh := make(chan uploadedPartRes, 10000)
// Used for readability, lastPartNumber is always totalPartsCount.
lastPartNumber := totalPartsCount
// Send each part number to the channel to be processed.
for p := 1; p <= totalPartsCount; p++ {
uploadPartsCh <- uploadPartReq{PartNum: p}
}
close(uploadPartsCh)
partsBuf := make([][]byte, opts.getNumThreads())
for i := range partsBuf {
partsBuf[i] = make([]byte, 0, partSize)
}
// Receive each part number from the channel, allowing up to opts.getNumThreads() parallel uploads.
for w := 1; w <= opts.getNumThreads(); w++ {
go func(w int, partSize int64) {
// Each worker will draw from the part channel and upload in parallel.
for uploadReq := range uploadPartsCh {
// If partNumber was not uploaded we calculate the missing
// part offset and size. For all other part numbers we
// calculate offset based on multiples of partSize.
readOffset := int64(uploadReq.PartNum-1) * partSize
// As a special case if partNumber is lastPartNumber, we
// calculate the offset based on the last part size.
if uploadReq.PartNum == lastPartNumber {
readOffset = (size - lastPartSize)
partSize = lastPartSize
}
n, rerr := readFull(io.NewSectionReader(reader, readOffset, partSize), partsBuf[w-1][:partSize])
if rerr != nil && rerr != io.ErrUnexpectedEOF && rerr != io.EOF {
uploadedPartsCh <- uploadedPartRes{
Error: rerr,
}
// Exit the goroutine.
return
}
// Get a section reader on a particular offset.
hookReader := newHook(bytes.NewReader(partsBuf[w-1][:n]), opts.Progress)
// Proceed to upload the part.
objPart, err := c.uploadPart(ctx, bucketName, objectName,
uploadID, hookReader, uploadReq.PartNum,
"", "", partSize, opts.ServerSideEncryption)
if err != nil {
uploadedPartsCh <- uploadedPartRes{
Error: err,
}
// Exit the goroutine.
return
}
// Save successfully uploaded part metadata.
uploadReq.Part = objPart
// Send successful part info through the channel.
uploadedPartsCh <- uploadedPartRes{
Size: objPart.Size,
PartNum: uploadReq.PartNum,
Part: uploadReq.Part,
}
}
}(w, partSize)
}
// Gather the responses as they occur and update any
// progress bar.
for u := 1; u <= totalPartsCount; u++ {
uploadRes := <-uploadedPartsCh
if uploadRes.Error != nil {
return UploadInfo{}, uploadRes.Error
}
// Update the totalUploadedSize.
totalUploadedSize += uploadRes.Size
// Store the parts to be completed in order.
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: uploadRes.Part.ETag,
PartNumber: uploadRes.Part.PartNumber,
})
}
// Verify if we uploaded all the data.
if totalUploadedSize != size {
return UploadInfo{}, errUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, PutObjectOptions{})
if err != nil {
return UploadInfo{}, err
}
uploadInfo.Size = totalUploadedSize
return uploadInfo, nil
}
func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, bucketName, objectName string,
reader io.Reader, size int64, opts PutObjectOptions,
) (info UploadInfo, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, lastPartSize, err := OptimalPartInfo(size, opts.PartSize)
if err != nil {
return UploadInfo{}, err
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return UploadInfo{}, err
}
// Abort the multipart upload if the function returns any error;
// since we do not resume, we purge the parts which have been
// uploaded to relinquish storage space.
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Total data read and written to server; should be equal to 'size' at the end of the call.
var totalUploadedSize int64
// Initialize parts uploaded map.
partsInfo := make(map[int]ObjectPart)
// Create a buffer.
buf := make([]byte, partSize)
// Avoid declaring variables in the for loop
var md5Base64 string
var hookReader io.Reader
// Part number always starts with '1'.
var partNumber int
for partNumber = 1; partNumber <= totalPartsCount; partNumber++ {
// Proceed to upload the part.
if partNumber == totalPartsCount {
partSize = lastPartSize
}
if opts.SendContentMd5 {
length, rerr := readFull(reader, buf)
if rerr == io.EOF && partNumber > 1 {
break
}
if rerr != nil && rerr != io.ErrUnexpectedEOF && err != io.EOF {
return UploadInfo{}, rerr
}
// Calculate md5sum.
hash := c.md5Hasher()
hash.Write(buf[:length])
md5Base64 = base64.StdEncoding.EncodeToString(hash.Sum(nil))
hash.Close()
// Update progress reader appropriately to the latest offset
// as we read from the source.
hookReader = newHook(bytes.NewReader(buf[:length]), opts.Progress)
} else {
// Update progress reader appropriately to the latest offset
// as we read from the source.
hookReader = newHook(reader, opts.Progress)
}
objPart, uerr := c.uploadPart(ctx, bucketName, objectName, uploadID,
io.LimitReader(hookReader, partSize),
partNumber, md5Base64, "", partSize, opts.ServerSideEncryption)
if uerr != nil {
return UploadInfo{}, uerr
}
// Save successfully uploaded part metadata.
partsInfo[partNumber] = objPart
// Save successfully uploaded size.
totalUploadedSize += partSize
}
// Verify if we uploaded all the data.
if size > 0 {
if totalUploadedSize != size {
return UploadInfo{}, errUnexpectedEOF(totalUploadedSize, size, bucketName, objectName)
}
}
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Loop over total uploaded parts to save them in
// Parts array before completing the multipart request.
for i := 1; i < partNumber; i++ {
part, ok := partsInfo[i]
if !ok {
return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Missing part number %d", i))
}
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: part.ETag,
PartNumber: part.PartNumber,
})
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, PutObjectOptions{})
if err != nil {
return UploadInfo{}, err
}
uploadInfo.Size = totalUploadedSize
return uploadInfo, nil
}
// putObject - special function used for Google Cloud Storage, since
// Google's multipart API is not S3 compatible.
func (c *Client) putObject(ctx context.Context, bucketName, objectName string, reader io.Reader, size int64, opts PutObjectOptions) (info UploadInfo, err error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Size -1 is only supported on Google Cloud Storage; we error
// out in all other situations.
if size < 0 && !s3utils.IsGoogleEndpoint(*c.endpointURL) {
return UploadInfo{}, errEntityTooSmall(size, bucketName, objectName)
}
if opts.SendContentMd5 && s3utils.IsGoogleEndpoint(*c.endpointURL) && size < 0 {
return UploadInfo{}, errInvalidArgument("MD5Sum cannot be calculated with size '-1'")
}
if size > 0 {
if isReadAt(reader) && !isObject(reader) {
seeker, ok := reader.(io.Seeker)
if ok {
offset, err := seeker.Seek(0, io.SeekCurrent)
if err != nil {
return UploadInfo{}, errInvalidArgument(err.Error())
}
reader = io.NewSectionReader(reader.(io.ReaderAt), offset, size)
}
}
}
var md5Base64 string
if opts.SendContentMd5 {
// Create a buffer.
buf := make([]byte, size)
length, rErr := readFull(reader, buf)
if rErr != nil && rErr != io.ErrUnexpectedEOF && rErr != io.EOF {
return UploadInfo{}, rErr
}
// Calculate md5sum.
hash := c.md5Hasher()
hash.Write(buf[:length])
md5Base64 = base64.StdEncoding.EncodeToString(hash.Sum(nil))
reader = bytes.NewReader(buf[:length])
hash.Close()
}
// Update progress reader appropriately to the latest offset as we
// read from the source.
readSeeker := newHook(reader, opts.Progress)
// This function does not calculate sha256 and md5sum for payload.
// Execute put object.
return c.putObjectDo(ctx, bucketName, objectName, readSeeker, md5Base64, "", size, opts)
}
// putObjectDo - executes the put object http operation.
// NOTE: You must have WRITE permissions on a bucket to add an object to it.
func (c *Client) putObjectDo(ctx context.Context, bucketName, objectName string, reader io.Reader, md5Base64, sha256Hex string, size int64, opts PutObjectOptions) (UploadInfo, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Set headers.
customHeader := opts.Header()
// Populate request metadata.
reqMetadata := requestMetadata{
bucketName: bucketName,
objectName: objectName,
customHeader: customHeader,
contentBody: reader,
contentLength: size,
contentMD5Base64: md5Base64,
contentSHA256Hex: sha256Hex,
}
if opts.Internal.SourceVersionID != "" {
if opts.Internal.SourceVersionID != nullVersionID {
if _, err := uuid.Parse(opts.Internal.SourceVersionID); err != nil {
return UploadInfo{}, errInvalidArgument(err.Error())
}
}
urlValues := make(url.Values)
urlValues.Set("versionId", opts.Internal.SourceVersionID)
reqMetadata.queryValues = urlValues
}
// Execute PUT an objectName.
resp, err := c.executeMethod(ctx, http.MethodPut, reqMetadata)
defer closeResponse(resp)
if err != nil {
return UploadInfo{}, err
}
if resp != nil {
if resp.StatusCode != http.StatusOK {
return UploadInfo{}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
// extract lifecycle expiry date and rule ID
expTime, ruleID := amzExpirationToExpiryDateRuleID(resp.Header.Get(amzExpiration))
return UploadInfo{
Bucket: bucketName,
Key: objectName,
ETag: trimEtag(resp.Header.Get("ETag")),
VersionID: resp.Header.Get(amzVersionID),
Size: size,
Expiration: expTime,
ExpirationRuleID: ruleID,
}, nil
}
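A minimal sketch of the parallel ReaderAt path above: an *os.File implements io.ReaderAt, so with SendContentMd5 unset the upload fans out across NumThreads workers; the endpoint, credentials, and file path are hypothetical:

package main

import (
	"context"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{ // hypothetical endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Open("/tmp/disk.img") // hypothetical large file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	st, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}
	// *os.File implements io.ReaderAt, so parts are read at offsets
	// and uploaded by 8 concurrent workers.
	_, err = client.PutObject(context.Background(), "my-bucket", "disk.img",
		f, st.Size(), minio.PutObjectOptions{NumThreads: 8})
	if err != nil {
		log.Fatal(err)
	}
}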

391
vendor/github.com/minio/minio-go/v7/api-put-object.go generated vendored Normal file
View File

@ -0,0 +1,391 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"sort"
"time"
"github.com/minio/minio-go/v7/pkg/encrypt"
"github.com/minio/minio-go/v7/pkg/s3utils"
"golang.org/x/net/http/httpguts"
)
// ReplicationStatus represents replication status of object
type ReplicationStatus string
const (
// ReplicationStatusPending indicates replication is pending
ReplicationStatusPending ReplicationStatus = "PENDING"
// ReplicationStatusComplete indicates replication completed ok
ReplicationStatusComplete ReplicationStatus = "COMPLETED"
// ReplicationStatusFailed indicates replication failed
ReplicationStatusFailed ReplicationStatus = "FAILED"
// ReplicationStatusReplica indicates object is a replica of a source
ReplicationStatusReplica ReplicationStatus = "REPLICA"
)
// Empty returns true if no replication status set.
func (r ReplicationStatus) Empty() bool {
return r == ""
}
// AdvancedPutOptions for internal use - to be utilized by replication, ILM transition
// implementation on MinIO server
type AdvancedPutOptions struct {
SourceVersionID string
SourceETag string
ReplicationStatus ReplicationStatus
SourceMTime time.Time
ReplicationRequest bool
RetentionTimestamp time.Time
TaggingTimestamp time.Time
LegalholdTimestamp time.Time
}
// PutObjectOptions represents options specified by user for PutObject call
type PutObjectOptions struct {
UserMetadata map[string]string
UserTags map[string]string
Progress io.Reader
ContentType string
ContentEncoding string
ContentDisposition string
ContentLanguage string
CacheControl string
Mode RetentionMode
RetainUntilDate time.Time
ServerSideEncryption encrypt.ServerSide
NumThreads uint
StorageClass string
WebsiteRedirectLocation string
PartSize uint64
LegalHold LegalHoldStatus
SendContentMd5 bool
DisableMultipart bool
Internal AdvancedPutOptions
}
// getNumThreads - gets the number of threads to be used in the multipart
// put object operation
func (opts PutObjectOptions) getNumThreads() (numThreads int) {
if opts.NumThreads > 0 {
numThreads = int(opts.NumThreads)
} else {
numThreads = totalWorkers
}
return
}
// Header - constructs the headers from metadata entered by user in
// PutObjectOptions struct
func (opts PutObjectOptions) Header() (header http.Header) {
header = make(http.Header)
contentType := opts.ContentType
if contentType == "" {
contentType = "application/octet-stream"
}
header.Set("Content-Type", contentType)
if opts.ContentEncoding != "" {
header.Set("Content-Encoding", opts.ContentEncoding)
}
if opts.ContentDisposition != "" {
header.Set("Content-Disposition", opts.ContentDisposition)
}
if opts.ContentLanguage != "" {
header.Set("Content-Language", opts.ContentLanguage)
}
if opts.CacheControl != "" {
header.Set("Cache-Control", opts.CacheControl)
}
if opts.Mode != "" {
header.Set(amzLockMode, opts.Mode.String())
}
if !opts.RetainUntilDate.IsZero() {
header.Set("X-Amz-Object-Lock-Retain-Until-Date", opts.RetainUntilDate.Format(time.RFC3339))
}
if opts.LegalHold != "" {
header.Set(amzLegalHoldHeader, opts.LegalHold.String())
}
if opts.ServerSideEncryption != nil {
opts.ServerSideEncryption.Marshal(header)
}
if opts.StorageClass != "" {
header.Set(amzStorageClass, opts.StorageClass)
}
if opts.WebsiteRedirectLocation != "" {
header.Set(amzWebsiteRedirectLocation, opts.WebsiteRedirectLocation)
}
if !opts.Internal.ReplicationStatus.Empty() {
header.Set(amzBucketReplicationStatus, string(opts.Internal.ReplicationStatus))
}
if !opts.Internal.SourceMTime.IsZero() {
header.Set(minIOBucketSourceMTime, opts.Internal.SourceMTime.Format(time.RFC3339Nano))
}
if opts.Internal.SourceETag != "" {
header.Set(minIOBucketSourceETag, opts.Internal.SourceETag)
}
if opts.Internal.ReplicationRequest {
header.Set(minIOBucketReplicationRequest, "")
}
if !opts.Internal.LegalholdTimestamp.IsZero() {
header.Set(minIOBucketReplicationObjectLegalHoldTimestamp, opts.Internal.LegalholdTimestamp.Format(time.RFC3339Nano))
}
if !opts.Internal.RetentionTimestamp.IsZero() {
header.Set(minIOBucketReplicationObjectRetentionTimestamp, opts.Internal.RetentionTimestamp.Format(time.RFC3339Nano))
}
if !opts.Internal.TaggingTimestamp.IsZero() {
header.Set(minIOBucketReplicationTaggingTimestamp, opts.Internal.TaggingTimestamp.Format(time.RFC3339Nano))
}
if len(opts.UserTags) != 0 {
header.Set(amzTaggingHeader, s3utils.TagEncode(opts.UserTags))
}
for k, v := range opts.UserMetadata {
if isAmzHeader(k) || isStandardHeader(k) || isStorageClassHeader(k) {
header.Set(k, v)
} else {
header.Set("x-amz-meta-"+k, v)
}
}
return
}
// validate() checks if the UserMetadata map has standard or reserved headers, or invalid header names/values, and raises an error if so.
func (opts PutObjectOptions) validate() (err error) {
for k, v := range opts.UserMetadata {
if !httpguts.ValidHeaderFieldName(k) || isStandardHeader(k) || isSSEHeader(k) || isStorageClassHeader(k) {
return errInvalidArgument(k + " unsupported user defined metadata name")
}
if !httpguts.ValidHeaderFieldValue(v) {
return errInvalidArgument(v + " unsupported user defined metadata value")
}
}
if opts.Mode != "" && !opts.Mode.IsValid() {
return errInvalidArgument(opts.Mode.String() + " unsupported retention mode")
}
if opts.LegalHold != "" && !opts.LegalHold.IsValid() {
return errInvalidArgument(opts.LegalHold.String() + " unsupported legal-hold status")
}
return nil
}
// completedParts is a collection of parts sortable by their part numbers.
// used for sorting the uploaded parts before completing the multipart request.
type completedParts []CompletePart
func (a completedParts) Len() int { return len(a) }
func (a completedParts) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a completedParts) Less(i, j int) bool { return a[i].PartNumber < a[j].PartNumber }
// PutObject creates an object in a bucket.
//
// You must have WRITE permissions on a bucket to create an object.
//
// - For size smaller than 16MiB PutObject automatically does a
// single atomic PUT operation.
//
// - For size larger than 16MiB PutObject automatically does a
// multipart upload operation.
//
// - For size input as -1 PutObject does a multipart Put operation
// until the input stream reaches EOF. The maximum object size that
// can be uploaded through this operation is 5TiB.
//
// WARNING: Passing '-1' buffers parts in memory, and these buffers
// cannot be reused; for best outcomes with PutObject(), always pass the size.
//
// NOTE: Upon errors during upload, the multipart operation is entirely aborted.
func (c *Client) PutObject(ctx context.Context, bucketName, objectName string, reader io.Reader, objectSize int64,
opts PutObjectOptions,
) (info UploadInfo, err error) {
if objectSize < 0 && opts.DisableMultipart {
return UploadInfo{}, errors.New("object size must be provided with disable multipart upload")
}
err = opts.validate()
if err != nil {
return UploadInfo{}, err
}
return c.putObjectCommon(ctx, bucketName, objectName, reader, objectSize, opts)
}
func (c *Client) putObjectCommon(ctx context.Context, bucketName, objectName string, reader io.Reader, size int64, opts PutObjectOptions) (info UploadInfo, err error) {
// Check for largest object size allowed.
if size > int64(maxMultipartPutObjectSize) {
return UploadInfo{}, errEntityTooLarge(size, maxMultipartPutObjectSize, bucketName, objectName)
}
// NOTE: Streaming signature is not supported by GCS.
if s3utils.IsGoogleEndpoint(*c.endpointURL) {
return c.putObject(ctx, bucketName, objectName, reader, size, opts)
}
partSize := opts.PartSize
if opts.PartSize == 0 {
partSize = minPartSize
}
if c.overrideSignerType.IsV2() {
if size >= 0 && size < int64(partSize) || opts.DisableMultipart {
return c.putObject(ctx, bucketName, objectName, reader, size, opts)
}
return c.putObjectMultipart(ctx, bucketName, objectName, reader, size, opts)
}
if size < 0 {
return c.putObjectMultipartStreamNoLength(ctx, bucketName, objectName, reader, opts)
}
if size < int64(partSize) || opts.DisableMultipart {
return c.putObject(ctx, bucketName, objectName, reader, size, opts)
}
return c.putObjectMultipartStream(ctx, bucketName, objectName, reader, size, opts)
}
func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketName, objectName string, reader io.Reader, opts PutObjectOptions) (info UploadInfo, err error) {
// Input validation.
if err = s3utils.CheckValidBucketName(bucketName); err != nil {
return UploadInfo{}, err
}
if err = s3utils.CheckValidObjectName(objectName); err != nil {
return UploadInfo{}, err
}
// Total data read and written to server; should be equal to
// 'size' at the end of the call.
var totalUploadedSize int64
// Complete multipart upload.
var complMultipartUpload completeMultipartUpload
// Calculate the optimal parts info for a given size.
totalPartsCount, partSize, _, err := OptimalPartInfo(-1, opts.PartSize)
if err != nil {
return UploadInfo{}, err
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
if err != nil {
return UploadInfo{}, err
}
defer func() {
if err != nil {
c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
}
}()
// Part number always starts with '1'.
partNumber := 1
// Initialize parts uploaded map.
partsInfo := make(map[int]ObjectPart)
// Create a buffer.
buf := make([]byte, partSize)
for partNumber <= totalPartsCount {
length, rerr := readFull(reader, buf)
if rerr == io.EOF && partNumber > 1 {
break
}
if rerr != nil && rerr != io.ErrUnexpectedEOF && rerr != io.EOF {
return UploadInfo{}, rerr
}
var md5Base64 string
if opts.SendContentMd5 {
// Calculate md5sum.
hash := c.md5Hasher()
hash.Write(buf[:length])
md5Base64 = base64.StdEncoding.EncodeToString(hash.Sum(nil))
hash.Close()
}
// Update progress reader appropriately to the latest offset
// as we read from the source.
rd := newHook(bytes.NewReader(buf[:length]), opts.Progress)
// Proceed to upload the part.
objPart, uerr := c.uploadPart(ctx, bucketName, objectName, uploadID, rd, partNumber,
md5Base64, "", int64(length), opts.ServerSideEncryption)
if uerr != nil {
return UploadInfo{}, uerr
}
// Save successfully uploaded part metadata.
partsInfo[partNumber] = objPart
// Save successfully uploaded size.
totalUploadedSize += int64(length)
// Increment part number.
partNumber++
// For unknown size, we break away on read EOF;
// we do not have to upload till totalPartsCount.
if rerr == io.EOF {
break
}
}
// Loop over total uploaded parts to save them in
// Parts array before completing the multipart request.
for i := 1; i < partNumber; i++ {
part, ok := partsInfo[i]
if !ok {
return UploadInfo{}, errInvalidArgument(fmt.Sprintf("Missing part number %d", i))
}
complMultipartUpload.Parts = append(complMultipartUpload.Parts, CompletePart{
ETag: part.ETag,
PartNumber: part.PartNumber,
})
}
// Sort all completed parts.
sort.Sort(completedParts(complMultipartUpload.Parts))
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, PutObjectOptions{})
if err != nil {
return UploadInfo{}, err
}
uploadInfo.Size = totalUploadedSize
return uploadInfo, nil
}
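A minimal sketch of an unknown-length upload (size -1), which takes the putObjectMultipartStreamNoLength path above; the endpoint, credentials, and names are hypothetical:

package main

import (
	"context"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("play.min.io", &minio.Options{ // hypothetical endpoint
		Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Size -1: parts are buffered in memory and uploaded until EOF,
	// so prefer passing the real size whenever it is known.
	_, err = client.PutObject(context.Background(), "my-bucket", "stream.log",
		os.Stdin, -1, minio.PutObjectOptions{})
	if err != nil {
		log.Fatal(err)
	}
}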

View File

@ -0,0 +1,215 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2021 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"archive/tar"
"bufio"
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"os"
"strings"
"sync"
"time"
"github.com/klauspost/compress/s2"
)
// SnowballOptions contains options for PutObjectsSnowball calls.
type SnowballOptions struct {
// Opts is options applied to all objects.
Opts PutObjectOptions
// Processing options:
// InMemory specifies that all objects should be collected in memory
// before they are uploaded.
// If false a temporary file will be created.
InMemory bool
// Compress enables content compression before upload.
// Compression will typically reduce memory and network usage.
// Compression can safely be enabled with MinIO hosts.
Compress bool
}
// SnowballObject contains information about a single object to be added to the snowball.
type SnowballObject struct {
// Key is the destination key, including prefix.
Key string
// Size is the content size of this object.
Size int64
// Modtime to apply to the object.
ModTime time.Time
// Content of the object.
// Exactly 'Size' number of bytes must be provided.
Content io.Reader
// Close will be called when an object has finished processing.
// Note that if PutObjectsSnowball returns because of an error,
// objects not consumed from the input will NOT have been closed.
// Leave as nil for no callback.
Close func()
}
type nopReadSeekCloser struct {
io.ReadSeeker
}
func (n nopReadSeekCloser) Close() error {
return nil
}
// This is available as io.ReadSeekCloser from go1.16
type readSeekCloser interface {
io.Reader
io.Closer
io.Seeker
}
// PutObjectsSnowball will put multiple objects with a single put call.
// A (compressed) TAR file will be created which will contain multiple objects.
// The key for each object will be used for the destination in the specified bucket.
// Total size should be < 5TB.
// This function blocks until 'objs' is closed and the content has been uploaded.
func (c Client) PutObjectsSnowball(ctx context.Context, bucketName string, opts SnowballOptions, objs <-chan SnowballObject) (err error) {
err = opts.Opts.validate()
if err != nil {
return err
}
var tmpWriter io.Writer
var getTmpReader func() (rc readSeekCloser, sz int64, err error)
if opts.InMemory {
b := bytes.NewBuffer(nil)
tmpWriter = b
getTmpReader = func() (readSeekCloser, int64, error) {
return nopReadSeekCloser{bytes.NewReader(b.Bytes())}, int64(b.Len()), nil
}
} else {
f, err := ioutil.TempFile("", "s3-putsnowballobjects-*")
if err != nil {
return err
}
name := f.Name()
tmpWriter = f
var once sync.Once
defer once.Do(func() {
f.Close()
})
defer os.Remove(name)
getTmpReader = func() (readSeekCloser, int64, error) {
once.Do(func() {
f.Close()
})
f, err := os.Open(name)
if err != nil {
return nil, 0, err
}
st, err := f.Stat()
if err != nil {
return nil, 0, err
}
return f, st.Size(), nil
}
}
flush := func() error { return nil }
if !opts.Compress {
if !opts.InMemory {
// Insert buffer for writes.
buf := bufio.NewWriterSize(tmpWriter, 1<<20)
flush = buf.Flush
tmpWriter = buf
}
} else {
s2c := s2.NewWriter(tmpWriter, s2.WriterBetterCompression())
flush = s2c.Close
defer s2c.Close()
tmpWriter = s2c
}
t := tar.NewWriter(tmpWriter)
objectLoop:
for {
select {
case <-ctx.Done():
return ctx.Err()
case obj, ok := <-objs:
if !ok {
break objectLoop
}
closeObj := func() {}
if obj.Close != nil {
closeObj = obj.Close
}
// Trim accidental slash prefix.
obj.Key = strings.TrimPrefix(obj.Key, "/")
header := tar.Header{
Typeflag: tar.TypeReg,
Name: obj.Key,
Size: obj.Size,
ModTime: obj.ModTime,
Format: tar.FormatPAX,
}
if err := t.WriteHeader(&header); err != nil {
closeObj()
return err
}
n, err := io.Copy(t, obj.Content)
if err != nil {
closeObj()
return err
}
if n != obj.Size {
closeObj()
return io.ErrUnexpectedEOF
}
closeObj()
}
}
// Flush tar
err = t.Flush()
if err != nil {
return err
}
// Flush compression
err = flush()
if err != nil {
return err
}
if opts.Opts.UserMetadata == nil {
opts.Opts.UserMetadata = map[string]string{}
}
opts.Opts.UserMetadata["X-Amz-Meta-Snowball-Auto-Extract"] = "true"
opts.Opts.DisableMultipart = true
rc, sz, err := getTmpReader()
if err != nil {
return err
}
defer rc.Close()
rand := c.random.Uint64()
_, err = c.PutObject(ctx, bucketName, fmt.Sprintf("snowball-upload-%x.tar", rand), rc, sz, opts.Opts)
return err
}
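For orientation, a minimal usage sketch of the API above; the client, bucket name, and object contents are hypothetical, and the snippet assumes a configured *minio.Client:

package main

import (
	"context"
	"strings"
	"time"

	"github.com/minio/minio-go/v7"
)

func uploadSnowball(ctx context.Context, client *minio.Client) error {
	objs := make(chan minio.SnowballObject, 2)
	go func() {
		defer close(objs)
		for _, name := range []string{"a.txt", "b.txt"} {
			body := "hello from " + name
			objs <- minio.SnowballObject{
				Key:     "batch/" + name,
				Size:    int64(len(body)),
				ModTime: time.Now(),
				Content: strings.NewReader(body),
			}
		}
	}()
	// InMemory avoids the temporary file; Compress is safe against MinIO hosts.
	return client.PutObjectsSnowball(ctx, "mybucket",
		minio.SnowballOptions{InMemory: true, Compress: true}, objs)
}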

544
vendor/github.com/minio/minio-go/v7/api-remove.go generated vendored Normal file
View File

@@ -0,0 +1,544 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"io"
"net/http"
"net/url"
"time"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
//revive:disable
// Deprecated: BucketOptions will be renamed to RemoveBucketOptions in future versions.
type BucketOptions = RemoveBucketOptions
//revive:enable
// RemoveBucketOptions contains special headers to purge buckets;
// only useful when the endpoint is MinIO.
type RemoveBucketOptions struct {
ForceDelete bool
}
// RemoveBucketWithOptions deletes the bucket name.
//
// All objects (including all object versions and delete markers)
// in the bucket will be deleted forcibly if bucket options set
// ForceDelete to 'true'.
func (c *Client) RemoveBucketWithOptions(ctx context.Context, bucketName string, opts RemoveBucketOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Build headers.
headers := make(http.Header)
if opts.ForceDelete {
headers.Set(minIOForceDelete, "true")
}
// Execute DELETE on bucket.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
contentSHA256Hex: emptySHA256Hex,
customHeader: headers,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
// Remove the location from cache on a successful delete.
c.bucketLocCache.Delete(bucketName)
return nil
}
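A brief sketch of a forced purge, assuming a hypothetical configured client plus the surrounding package and imports (context, minio):

func purgeBucket(ctx context.Context, client *minio.Client) error {
	// ForceDelete also removes all objects, versions and delete markers (MinIO only).
	return client.RemoveBucketWithOptions(ctx, "scratch-bucket",
		minio.RemoveBucketOptions{ForceDelete: true})
}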
// RemoveBucket deletes the bucket name.
//
// All objects (including all object versions and delete markers)
// in the bucket must be deleted before successfully attempting this request.
func (c *Client) RemoveBucket(ctx context.Context, bucketName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
// Execute DELETE on bucket.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent {
return httpRespToErrorResponse(resp, bucketName, "")
}
}
// Remove the location from cache on a successful delete.
c.bucketLocCache.Delete(bucketName)
return nil
}
// AdvancedRemoveOptions intended for internal use by replication
type AdvancedRemoveOptions struct {
ReplicationDeleteMarker bool
ReplicationStatus ReplicationStatus
ReplicationMTime time.Time
ReplicationRequest bool
}
// RemoveObjectOptions represents options specified by user for RemoveObject call
type RemoveObjectOptions struct {
ForceDelete bool
GovernanceBypass bool
VersionID string
Internal AdvancedRemoveOptions
}
// RemoveObject removes an object from a bucket.
func (c *Client) RemoveObject(ctx context.Context, bucketName, objectName string, opts RemoveObjectOptions) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
res := c.removeObject(ctx, bucketName, objectName, opts)
return res.Err
}
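A sketch of removing one specific object version; the bucket, key, and version ID are placeholders, and the usual package and imports are assumed:

func removeOneVersion(ctx context.Context, client *minio.Client) error {
	return client.RemoveObject(ctx, "mybucket", "report.pdf", minio.RemoveObjectOptions{
		GovernanceBypass: true,
		VersionID:        "example-version-id", // placeholder: take this from a versions listing
	})
}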
func (c *Client) removeObject(ctx context.Context, bucketName, objectName string, opts RemoveObjectOptions) RemoveObjectResult {
// Get resources properly escaped and lined up before
// using them in http request.
urlValues := make(url.Values)
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
// Build headers.
headers := make(http.Header)
if opts.GovernanceBypass {
// Set the bypass governance retention header
headers.Set(amzBypassGovernance, "true")
}
if opts.Internal.ReplicationDeleteMarker {
headers.Set(minIOBucketReplicationDeleteMarker, "true")
}
if !opts.Internal.ReplicationMTime.IsZero() {
headers.Set(minIOBucketSourceMTime, opts.Internal.ReplicationMTime.Format(time.RFC3339Nano))
}
if !opts.Internal.ReplicationStatus.Empty() {
headers.Set(amzBucketReplicationStatus, string(opts.Internal.ReplicationStatus))
}
if opts.Internal.ReplicationRequest {
headers.Set(minIOBucketReplicationRequest, "")
}
if opts.ForceDelete {
headers.Set(minIOForceDelete, "true")
}
// Execute DELETE on objectName.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
objectName: objectName,
contentSHA256Hex: emptySHA256Hex,
queryValues: urlValues,
customHeader: headers,
})
defer closeResponse(resp)
if err != nil {
return RemoveObjectResult{Err: err}
}
if resp != nil {
// if some unexpected error happened and max retry is reached, we want to let client know
if resp.StatusCode != http.StatusNoContent {
err := httpRespToErrorResponse(resp, bucketName, objectName)
return RemoveObjectResult{Err: err}
}
}
// DeleteObject always responds with http '204' even for
// objects which do not exist. So no need to handle them
// specifically.
return RemoveObjectResult{
ObjectName: objectName,
ObjectVersionID: opts.VersionID,
DeleteMarker: resp.Header.Get("x-amz-delete-marker") == "true",
DeleteMarkerVersionID: resp.Header.Get("x-amz-version-id"),
}
}
// RemoveObjectError - container of Multi Delete S3 API error
type RemoveObjectError struct {
ObjectName string
VersionID string
Err error
}
// RemoveObjectResult - container of Multi Delete S3 API result
type RemoveObjectResult struct {
ObjectName string
ObjectVersionID string
DeleteMarker bool
DeleteMarkerVersionID string
Err error
}
// generateRemoveMultiObjectsRequest - generate the XML request for a multi-object delete request
func generateRemoveMultiObjectsRequest(objects []ObjectInfo) []byte {
delObjects := []deleteObject{}
for _, obj := range objects {
delObjects = append(delObjects, deleteObject{
Key: obj.Key,
VersionID: obj.VersionID,
})
}
xmlBytes, _ := xml.Marshal(deleteMultiObjects{Objects: delObjects, Quiet: false})
return xmlBytes
}
// processRemoveMultiObjectsResponse - parse the remove multi objects web service
// and return the success/failure result status for each object
func processRemoveMultiObjectsResponse(body io.Reader, objects []ObjectInfo, resultCh chan<- RemoveObjectResult) {
// Parse multi delete XML response
rmResult := &deleteMultiObjectsResult{}
err := xmlDecoder(body, rmResult)
if err != nil {
resultCh <- RemoveObjectResult{ObjectName: "", Err: err}
return
}
// Fill deletion that returned success
for _, obj := range rmResult.DeletedObjects {
resultCh <- RemoveObjectResult{
ObjectName: obj.Key,
// Only filled with versioned buckets
ObjectVersionID: obj.VersionID,
DeleteMarker: obj.DeleteMarker,
DeleteMarkerVersionID: obj.DeleteMarkerVersionID,
}
}
// Fill deletion that returned an error.
for _, obj := range rmResult.UnDeletedObjects {
// A missing version is not an error; ignore and continue.
switch obj.Code {
case "InvalidArgument", "NoSuchVersion":
continue
}
resultCh <- RemoveObjectResult{
ObjectName: obj.Key,
ObjectVersionID: obj.VersionID,
Err: ErrorResponse{
Code: obj.Code,
Message: obj.Message,
},
}
}
}
// RemoveObjectsOptions represents options specified by user for RemoveObjects call
type RemoveObjectsOptions struct {
GovernanceBypass bool
}
// RemoveObjects removes multiple objects from a bucket.
// Object versions to delete can be specified via the entries
// received from objectsCh. Remove failures are sent back via the error channel.
func (c *Client) RemoveObjects(ctx context.Context, bucketName string, objectsCh <-chan ObjectInfo, opts RemoveObjectsOptions) <-chan RemoveObjectError {
errorCh := make(chan RemoveObjectError, 1)
// Validate if bucket name is valid.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(errorCh)
errorCh <- RemoveObjectError{
Err: err,
}
return errorCh
}
// Validate objects channel to be properly allocated.
if objectsCh == nil {
defer close(errorCh)
errorCh <- RemoveObjectError{
Err: errInvalidArgument("Objects channel cannot be nil"),
}
return errorCh
}
resultCh := make(chan RemoveObjectResult, 1)
go c.removeObjects(ctx, bucketName, objectsCh, resultCh, opts)
go func() {
defer close(errorCh)
for res := range resultCh {
// Send only errors to the error channel
if res.Err == nil {
continue
}
errorCh <- RemoveObjectError{
ObjectName: res.ObjectName,
VersionID: res.ObjectVersionID,
Err: res.Err,
}
}
}()
return errorCh
}
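To see the channel plumbing end to end, a sketch that feeds a listing into RemoveObjects and drains the error channel; the client, bucket, and prefix are hypothetical:

package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
)

func removeByPrefix(ctx context.Context, client *minio.Client) {
	objectsCh := make(chan minio.ObjectInfo)
	go func() {
		defer close(objectsCh)
		for obj := range client.ListObjects(ctx, "mybucket",
			minio.ListObjectsOptions{Prefix: "tmp/", Recursive: true}) {
			if obj.Err == nil {
				objectsCh <- obj
			}
		}
	}()
	// The returned channel yields only failures; an empty drain means full success.
	for rErr := range client.RemoveObjects(ctx, "mybucket", objectsCh, minio.RemoveObjectsOptions{}) {
		log.Printf("failed to remove %s: %v", rErr.ObjectName, rErr.Err)
	}
}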
// RemoveObjectsWithResult removes multiple objects from a bucket.
// Object versions to delete can be specified via the entries
// received from objectsCh. Remove results, successes and failures,
// are sent back via the RemoveObjectResult channel.
func (c *Client) RemoveObjectsWithResult(ctx context.Context, bucketName string, objectsCh <-chan ObjectInfo, opts RemoveObjectsOptions) <-chan RemoveObjectResult {
resultCh := make(chan RemoveObjectResult, 1)
// Validate if bucket name is valid.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
defer close(resultCh)
resultCh <- RemoveObjectResult{
Err: err,
}
return resultCh
}
// Validate objects channel to be properly allocated.
if objectsCh == nil {
defer close(resultCh)
resultCh <- RemoveObjectResult{
Err: errInvalidArgument("Objects channel cannot be nil"),
}
return resultCh
}
go c.removeObjects(ctx, bucketName, objectsCh, resultCh, opts)
return resultCh
}
// Return true if the character is within the allowed characters in an XML 1.0 document
// The list of allowed characters can be found here: https://www.w3.org/TR/xml/#charsets
func validXMLChar(r rune) (ok bool) {
return r == 0x09 ||
r == 0x0A ||
r == 0x0D ||
r >= 0x20 && r <= 0xD7FF ||
r >= 0xE000 && r <= 0xFFFD ||
r >= 0x10000 && r <= 0x10FFFF
}
func hasInvalidXMLChar(str string) bool {
for _, s := range str {
if !validXMLChar(s) {
return true
}
}
return false
}
// Generate and call MultiDelete S3 requests based on entries received from objectsCh
func (c *Client) removeObjects(ctx context.Context, bucketName string, objectsCh <-chan ObjectInfo, resultCh chan<- RemoveObjectResult, opts RemoveObjectsOptions) {
maxEntries := 1000
finish := false
urlValues := make(url.Values)
urlValues.Set("delete", "")
// Close result channel when Multi delete finishes.
defer close(resultCh)
// Loop over entries by 1000 and call MultiDelete requests
for {
if finish {
break
}
count := 0
var batch []ObjectInfo
// Try to gather 1000 entries
for object := range objectsCh {
if hasInvalidXMLChar(object.Key) {
// Use single DELETE so the object name will be in the request URL instead of the multi-delete XML document.
removeResult := c.removeObject(ctx, bucketName, object.Key, RemoveObjectOptions{
VersionID: object.VersionID,
GovernanceBypass: opts.GovernanceBypass,
})
if err := removeResult.Err; err != nil {
// A missing version is not an error; ignore and continue.
switch ToErrorResponse(err).Code {
case "InvalidArgument", "NoSuchVersion":
continue
}
}
// Send the result exactly once, for both successes and failures.
resultCh <- removeResult
continue
}
batch = append(batch, object)
if count++; count >= maxEntries {
break
}
}
if count == 0 {
// Multi Objects Delete API doesn't accept empty object list, quit immediately
break
}
if count < maxEntries {
// We didn't have 1000 entries, so this is the last batch
finish = true
}
// Build headers.
headers := make(http.Header)
if opts.GovernanceBypass {
// Set the bypass governance retention header
headers.Set(amzBypassGovernance, "true")
}
// Generate remove multi objects XML request
removeBytes := generateRemoveMultiObjectsRequest(batch)
// Execute POST on bucket to delete objects.
resp, err := c.executeMethod(ctx, http.MethodPost, requestMetadata{
bucketName: bucketName,
queryValues: urlValues,
contentBody: bytes.NewReader(removeBytes),
contentLength: int64(len(removeBytes)),
contentMD5Base64: sumMD5Base64(removeBytes),
contentSHA256Hex: sum256Hex(removeBytes),
customHeader: headers,
})
if resp != nil {
if resp.StatusCode != http.StatusOK {
e := httpRespToErrorResponse(resp, bucketName, "")
resultCh <- RemoveObjectResult{ObjectName: "", Err: e}
}
}
if err != nil {
for _, b := range batch {
resultCh <- RemoveObjectResult{
ObjectName: b.Key,
ObjectVersionID: b.VersionID,
Err: err,
}
}
continue
}
// Process multiobjects remove xml response
processRemoveMultiObjectsResponse(resp.Body, batch, resultCh)
closeResponse(resp)
}
}
// RemoveIncompleteUpload aborts a partially uploaded object.
func (c *Client) RemoveIncompleteUpload(ctx context.Context, bucketName, objectName string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Find multipart upload ids of the object to be aborted.
uploadIDs, err := c.findUploadIDs(ctx, bucketName, objectName)
if err != nil {
return err
}
for _, uploadID := range uploadIDs {
// abort incomplete multipart upload, based on the upload id passed.
err := c.abortMultipartUpload(ctx, bucketName, objectName, uploadID)
if err != nil {
return err
}
}
return nil
}
// abortMultipartUpload aborts a multipart upload for the given
// uploadID; all previously uploaded parts are deleted.
func (c *Client) abortMultipartUpload(ctx context.Context, bucketName, objectName, uploadID string) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
// Initialize url queries.
urlValues := make(url.Values)
urlValues.Set("uploadId", uploadID)
// Execute DELETE on multipart upload.
resp, err := c.executeMethod(ctx, http.MethodDelete, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp != nil {
if resp.StatusCode != http.StatusNoContent {
// Abort has no response body, handle it for any errors.
var errorResponse ErrorResponse
switch resp.StatusCode {
case http.StatusNotFound:
// This is needed specifically for abort and it cannot
// be converged into default case.
errorResponse = ErrorResponse{
Code: "NoSuchUpload",
Message: "The specified multipart upload does not exist.",
BucketName: bucketName,
Key: objectName,
RequestID: resp.Header.Get("x-amz-request-id"),
HostID: resp.Header.Get("x-amz-id-2"),
Region: resp.Header.Get("x-amz-bucket-region"),
}
default:
return httpRespToErrorResponse(resp, bucketName, objectName)
}
return errorResponse
}
}
return nil
}

182
vendor/github.com/minio/minio-go/v7/api-restore.go generated vendored Normal file
View File

@@ -0,0 +1,182 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* (C) 2018-2021 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/xml"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/tags"
)
// RestoreType represents the restore request type
type RestoreType string
const (
// RestoreSelect represents the restore SELECT operation
RestoreSelect = RestoreType("SELECT")
)
// TierType represents a retrieval tier
type TierType string
const (
// TierStandard is the standard retrieval tier
TierStandard = TierType("Standard")
// TierBulk is the bulk retrieval tier
TierBulk = TierType("Bulk")
// TierExpedited is the expedited retrieval tier
TierExpedited = TierType("Expedited")
)
// GlacierJobParameters represents the retrieval tier parameter
type GlacierJobParameters struct {
Tier TierType
}
// Encryption contains the type of server-side encryption used during object retrieval
type Encryption struct {
EncryptionType string
KMSContext string
KMSKeyID string `xml:"KMSKeyId"`
}
// MetadataEntry represents a metadata information of the restored object.
type MetadataEntry struct {
Name string
Value string
}
// S3 holds properties of the copy of the archived object
type S3 struct {
AccessControlList *AccessControlList `xml:"AccessControlList,omitempty"`
BucketName string
Prefix string
CannedACL *string `xml:"CannedACL,omitempty"`
Encryption *Encryption `xml:"Encryption,omitempty"`
StorageClass *string `xml:"StorageClass,omitempty"`
Tagging *tags.Tags `xml:"Tagging,omitempty"`
UserMetadata *MetadataEntry `xml:"UserMetadata,omitempty"`
}
// SelectParameters holds the select request parameters
type SelectParameters struct {
XMLName xml.Name `xml:"SelectParameters"`
ExpressionType QueryExpressionType
Expression string
InputSerialization SelectObjectInputSerialization
OutputSerialization SelectObjectOutputSerialization
}
// OutputLocation holds properties of the copy of the archived object
type OutputLocation struct {
XMLName xml.Name `xml:"OutputLocation"`
S3 S3 `xml:"S3"`
}
// RestoreRequest holds properties of the restore object request
type RestoreRequest struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ RestoreRequest"`
Type *RestoreType `xml:"Type,omitempty"`
Tier *TierType `xml:"Tier,omitempty"`
Days *int `xml:"Days,omitempty"`
GlacierJobParameters *GlacierJobParameters `xml:"GlacierJobParameters,omitempty"`
Description *string `xml:"Description,omitempty"`
SelectParameters *SelectParameters `xml:"SelectParameters,omitempty"`
OutputLocation *OutputLocation `xml:"OutputLocation,omitempty"`
}
// SetDays sets the days parameter of the restore request
func (r *RestoreRequest) SetDays(v int) {
r.Days = &v
}
// SetGlacierJobParameters sets the GlacierJobParameters of the restore request
func (r *RestoreRequest) SetGlacierJobParameters(v GlacierJobParameters) {
r.GlacierJobParameters = &v
}
// SetType sets the type of the restore request
func (r *RestoreRequest) SetType(v RestoreType) {
r.Type = &v
}
// SetTier sets the retrieval tier of the restore request
func (r *RestoreRequest) SetTier(v TierType) {
r.Tier = &v
}
// SetDescription sets the description of the restore request
func (r *RestoreRequest) SetDescription(v string) {
r.Description = &v
}
// SetSelectParameters sets SelectParameters of the restore select request
func (r *RestoreRequest) SetSelectParameters(v SelectParameters) {
r.SelectParameters = &v
}
// SetOutputLocation sets the properties of the copy of the archived object
func (r *RestoreRequest) SetOutputLocation(v OutputLocation) {
r.OutputLocation = &v
}
// RestoreObject is an implementation of the AWS S3 RestoreObject API:
// https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html
func (c *Client) RestoreObject(ctx context.Context, bucketName, objectName, versionID string, req RestoreRequest) error {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return err
}
restoreRequestBytes, err := xml.Marshal(req)
if err != nil {
return err
}
urlValues := make(url.Values)
urlValues.Set("restore", "")
if versionID != "" {
urlValues.Set("versionId", versionID)
}
// Execute POST on bucket/object.
resp, err := c.executeMethod(ctx, http.MethodPost, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentMD5Base64: sumMD5Base64(restoreRequestBytes),
contentSHA256Hex: sum256Hex(restoreRequestBytes),
contentBody: bytes.NewReader(restoreRequestBytes),
contentLength: int64(len(restoreRequestBytes)),
})
defer closeResponse(resp)
if err != nil {
return err
}
if resp.StatusCode != http.StatusAccepted && resp.StatusCode != http.StatusOK {
return httpRespToErrorResponse(resp, bucketName, "")
}
return nil
}
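A sketch of a plain Glacier-style restore using the setters above; the bucket and key are placeholders, and the usual package and imports are assumed:

func restoreFromArchive(ctx context.Context, client *minio.Client) error {
	req := minio.RestoreRequest{}
	req.SetDays(3)
	req.SetGlacierJobParameters(minio.GlacierJobParameters{Tier: minio.TierStandard})
	// An empty versionID targets the latest version of the object.
	return client.RestoreObject(ctx, "mybucket", "archive/backup.tar", "", req)
}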

361
vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go generated vendored Normal file
View File

@@ -0,0 +1,361 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"encoding/xml"
"errors"
"io"
"reflect"
"time"
)
// listAllMyBucketsResult container for listBuckets response.
type listAllMyBucketsResult struct {
// Container for one or more buckets.
Buckets struct {
Bucket []BucketInfo
}
Owner owner
}
// owner container for bucket owner information.
type owner struct {
DisplayName string
ID string
}
// CommonPrefix container for prefix response.
type CommonPrefix struct {
Prefix string
}
// ListBucketV2Result container for listObjects response version 2.
type ListBucketV2Result struct {
// A response can contain CommonPrefixes only if you have
// specified a delimiter.
CommonPrefixes []CommonPrefix
// Metadata about each object returned.
Contents []ObjectInfo
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
MaxKeys int64
Name string
// Hold the token that will be sent in the next request to fetch the next group of keys
NextContinuationToken string
ContinuationToken string
Prefix string
// FetchOwner and StartAfter are currently not used
FetchOwner string
StartAfter string
}
// Version is an element in the list object versions response
type Version struct {
ETag string
IsLatest bool
Key string
LastModified time.Time
Owner Owner
Size int64
StorageClass string
VersionID string `xml:"VersionId"`
isDeleteMarker bool
}
// ListVersionsResult is an element in the list object versions response
// and has a special Unmarshaler because we need to preserve the order
// of <Version> and <DeleteMarker> in the ListVersionsResult.Versions slice.
type ListVersionsResult struct {
Versions []Version
CommonPrefixes []CommonPrefix
Name string
Prefix string
Delimiter string
MaxKeys int64
EncodingType string
IsTruncated bool
KeyMarker string
VersionIDMarker string
NextKeyMarker string
NextVersionIDMarker string
}
// UnmarshalXML is custom unmarshal code for the response of ListObjectVersions;
// it unmarshals <Version> and <DeleteMarker> tags and saves them in the Versions
// field to preserve the lexical order of the listing.
func (l *ListVersionsResult) UnmarshalXML(d *xml.Decoder, start xml.StartElement) (err error) {
for {
// Read tokens from the XML document in a stream.
t, err := d.Token()
if err != nil {
if err == io.EOF {
break
}
return err
}
se, ok := t.(xml.StartElement)
if ok {
tagName := se.Name.Local
switch tagName {
case "Name", "Prefix",
"Delimiter", "EncodingType",
"KeyMarker", "NextKeyMarker":
var s string
if err = d.DecodeElement(&s, &se); err != nil {
return err
}
v := reflect.ValueOf(l).Elem().FieldByName(tagName)
if v.IsValid() {
v.SetString(s)
}
case "VersionIdMarker":
// VersionIdMarker is a special case because of 'Id' instead of 'ID' in field name
var s string
if err = d.DecodeElement(&s, &se); err != nil {
return err
}
l.VersionIDMarker = s
case "NextVersionIdMarker":
// NextVersionIdMarker is a special case because of 'Id' instead of 'ID' in field name
var s string
if err = d.DecodeElement(&s, &se); err != nil {
return err
}
l.NextVersionIDMarker = s
case "IsTruncated": // bool
var b bool
if err = d.DecodeElement(&b, &se); err != nil {
return err
}
l.IsTruncated = b
case "MaxKeys": // int64
var i int64
if err = d.DecodeElement(&i, &se); err != nil {
return err
}
l.MaxKeys = i
case "CommonPrefixes":
var cp CommonPrefix
if err = d.DecodeElement(&cp, &se); err != nil {
return err
}
l.CommonPrefixes = append(l.CommonPrefixes, cp)
case "DeleteMarker", "Version":
var v Version
if err = d.DecodeElement(&v, &se); err != nil {
return err
}
if tagName == "DeleteMarker" {
v.isDeleteMarker = true
}
l.Versions = append(l.Versions, v)
default:
return errors.New("unrecognized option:" + tagName)
}
}
}
return nil
}
// ListBucketResult container for listObjects response.
type ListBucketResult struct {
// A response can contain CommonPrefixes only if you have
// specified a delimiter.
CommonPrefixes []CommonPrefix
// Metadata about each object returned.
Contents []ObjectInfo
Delimiter string
// Encoding type used to encode object keys in the response.
EncodingType string
// A flag that indicates whether or not ListObjects returned all of the results
// that satisfied the search criteria.
IsTruncated bool
Marker string
MaxKeys int64
Name string
// When response is truncated (the IsTruncated element value in
// the response is true), you can use the key name in this field
// as marker in the subsequent request to get next set of objects.
// Object storage lists objects in alphabetical order. Note: This
// element is returned only if you have the delimiter request
// parameter specified. If the response does not include NextMarker
// and it is truncated, you can use the value of the last Key in
// the response as the marker in the subsequent request to get the
// next set of object keys.
NextMarker string
Prefix string
}
// ListMultipartUploadsResult container for ListMultipartUploads response
type ListMultipartUploadsResult struct {
Bucket string
KeyMarker string
UploadIDMarker string `xml:"UploadIdMarker"`
NextKeyMarker string
NextUploadIDMarker string `xml:"NextUploadIdMarker"`
EncodingType string
MaxUploads int64
IsTruncated bool
Uploads []ObjectMultipartInfo `xml:"Upload"`
Prefix string
Delimiter string
// A response can contain CommonPrefixes only if you specify a delimiter.
CommonPrefixes []CommonPrefix
}
// initiator container for who initiated multipart upload.
type initiator struct {
ID string
DisplayName string
}
// copyObjectResult container for copy object response.
type copyObjectResult struct {
ETag string
LastModified time.Time // time string format "2006-01-02T15:04:05.000Z"
}
// ObjectPart container for particular part of an object.
type ObjectPart struct {
// Part number identifies the part.
PartNumber int
// Date and time the part was uploaded.
LastModified time.Time
// Entity tag returned when the part was uploaded, usually md5sum
// of the part.
ETag string
// Size of the uploaded part data.
Size int64
}
// ListObjectPartsResult container for ListObjectParts response.
type ListObjectPartsResult struct {
Bucket string
Key string
UploadID string `xml:"UploadId"`
Initiator initiator
Owner owner
StorageClass string
PartNumberMarker int
NextPartNumberMarker int
MaxParts int
// Indicates whether the returned list of parts is truncated.
IsTruncated bool
ObjectParts []ObjectPart `xml:"Part"`
EncodingType string
}
// initiateMultipartUploadResult container for InitiateMultiPartUpload
// response.
type initiateMultipartUploadResult struct {
Bucket string
Key string
UploadID string `xml:"UploadId"`
}
// completeMultipartUploadResult container for completed multipart
// upload response.
type completeMultipartUploadResult struct {
Location string
Bucket string
Key string
ETag string
}
// CompletePart sub container lists individual part numbers and their
// md5sum, part of completeMultipartUpload.
type CompletePart struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ Part" json:"-"`
// Part number identifies the part.
PartNumber int
ETag string
}
// completeMultipartUpload container for completing multipart upload.
type completeMultipartUpload struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CompleteMultipartUpload" json:"-"`
Parts []CompletePart `xml:"Part"`
}
// createBucketConfiguration container for bucket configuration.
type createBucketConfiguration struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CreateBucketConfiguration" json:"-"`
Location string `xml:"LocationConstraint"`
}
// deleteObject container for Delete element in MultiObjects Delete XML request
type deleteObject struct {
Key string
VersionID string `xml:"VersionId,omitempty"`
}
// deletedObject container for Deleted element in MultiObjects Delete XML response
type deletedObject struct {
Key string
VersionID string `xml:"VersionId,omitempty"`
// These fields are ignored.
DeleteMarker bool
DeleteMarkerVersionID string `xml:"DeleteMarkerVersionId,omitempty"`
}
// nonDeletedObject container for Error element (failed deletion) in MultiObjects Delete XML response
type nonDeletedObject struct {
Key string
Code string
Message string
VersionID string `xml:"VersionId"`
}
// deletedMultiObjects container for MultiObjects Delete XML request
type deleteMultiObjects struct {
XMLName xml.Name `xml:"Delete"`
Quiet bool
Objects []deleteObject `xml:"Object"`
}
// deletedMultiObjectsResult container for MultiObjects Delete XML response
type deleteMultiObjectsResult struct {
XMLName xml.Name `xml:"DeleteResult"`
DeletedObjects []deletedObject `xml:"Deleted"`
UnDeletedObjects []nonDeletedObject `xml:"Error"`
}

757
vendor/github.com/minio/minio-go/v7/api-select.go generated vendored Normal file
View File

@@ -0,0 +1,757 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* (C) 2018-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"encoding/binary"
"encoding/xml"
"errors"
"fmt"
"hash"
"hash/crc32"
"io"
"net/http"
"net/url"
"strings"
"github.com/minio/minio-go/v7/pkg/encrypt"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// CSVFileHeaderInfo - is the parameter for whether to utilize headers.
type CSVFileHeaderInfo string
// Constants for file header info.
const (
CSVFileHeaderInfoNone CSVFileHeaderInfo = "NONE"
CSVFileHeaderInfoIgnore = "IGNORE"
CSVFileHeaderInfoUse = "USE"
)
// SelectCompressionType - is the parameter for what type of compression is
// present
type SelectCompressionType string
// Constants for compression types under select API.
const (
SelectCompressionNONE SelectCompressionType = "NONE"
SelectCompressionGZIP = "GZIP"
SelectCompressionBZIP = "BZIP2"
// Non-standard compression schemes, supported by MinIO hosts:
SelectCompressionZSTD = "ZSTD" // Zstandard compression.
SelectCompressionLZ4 = "LZ4" // LZ4 Stream
SelectCompressionS2 = "S2" // S2 Stream
SelectCompressionSNAPPY = "SNAPPY" // Snappy stream
)
// CSVQuoteFields - is the parameter for how CSV fields are quoted.
type CSVQuoteFields string
// Constants for csv quote styles.
const (
CSVQuoteFieldsAlways CSVQuoteFields = "Always"
CSVQuoteFieldsAsNeeded = "AsNeeded"
)
// QueryExpressionType - indicates the syntax of the expression;
// currently only SQL is supported.
type QueryExpressionType string
// Constants for expression type.
const (
QueryExpressionTypeSQL QueryExpressionType = "SQL"
)
// JSONType determines json input serialization type.
type JSONType string
// Constants for JSONTypes.
const (
JSONDocumentType JSONType = "DOCUMENT"
JSONLinesType = "LINES"
)
// ParquetInputOptions parquet input specific options
type ParquetInputOptions struct{}
// CSVInputOptions csv input specific options
type CSVInputOptions struct {
FileHeaderInfo CSVFileHeaderInfo
fileHeaderInfoSet bool
RecordDelimiter string
recordDelimiterSet bool
FieldDelimiter string
fieldDelimiterSet bool
QuoteCharacter string
quoteCharacterSet bool
QuoteEscapeCharacter string
quoteEscapeCharacterSet bool
Comments string
commentsSet bool
}
// SetFileHeaderInfo sets the file header info in the CSV input options
func (c *CSVInputOptions) SetFileHeaderInfo(val CSVFileHeaderInfo) {
c.FileHeaderInfo = val
c.fileHeaderInfoSet = true
}
// SetRecordDelimiter sets the record delimiter in the CSV input options
func (c *CSVInputOptions) SetRecordDelimiter(val string) {
c.RecordDelimiter = val
c.recordDelimiterSet = true
}
// SetFieldDelimiter sets the field delimiter in the CSV input options
func (c *CSVInputOptions) SetFieldDelimiter(val string) {
c.FieldDelimiter = val
c.fieldDelimiterSet = true
}
// SetQuoteCharacter sets the quote character in the CSV input options
func (c *CSVInputOptions) SetQuoteCharacter(val string) {
c.QuoteCharacter = val
c.quoteCharacterSet = true
}
// SetQuoteEscapeCharacter sets the quote escape character in the CSV input options
func (c *CSVInputOptions) SetQuoteEscapeCharacter(val string) {
c.QuoteEscapeCharacter = val
c.quoteEscapeCharacterSet = true
}
// SetComments sets the comments character in the CSV input options
func (c *CSVInputOptions) SetComments(val string) {
c.Comments = val
c.commentsSet = true
}
// MarshalXML - produces the xml representation of the CSV input options struct
func (c CSVInputOptions) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
if err := e.EncodeToken(start); err != nil {
return err
}
if c.FileHeaderInfo != "" || c.fileHeaderInfoSet {
if err := e.EncodeElement(c.FileHeaderInfo, xml.StartElement{Name: xml.Name{Local: "FileHeaderInfo"}}); err != nil {
return err
}
}
if c.RecordDelimiter != "" || c.recordDelimiterSet {
if err := e.EncodeElement(c.RecordDelimiter, xml.StartElement{Name: xml.Name{Local: "RecordDelimiter"}}); err != nil {
return err
}
}
if c.FieldDelimiter != "" || c.fieldDelimiterSet {
if err := e.EncodeElement(c.FieldDelimiter, xml.StartElement{Name: xml.Name{Local: "FieldDelimiter"}}); err != nil {
return err
}
}
if c.QuoteCharacter != "" || c.quoteCharacterSet {
if err := e.EncodeElement(c.QuoteCharacter, xml.StartElement{Name: xml.Name{Local: "QuoteCharacter"}}); err != nil {
return err
}
}
if c.QuoteEscapeCharacter != "" || c.quoteEscapeCharacterSet {
if err := e.EncodeElement(c.QuoteEscapeCharacter, xml.StartElement{Name: xml.Name{Local: "QuoteEscapeCharacter"}}); err != nil {
return err
}
}
if c.Comments != "" || c.commentsSet {
if err := e.EncodeElement(c.Comments, xml.StartElement{Name: xml.Name{Local: "Comments"}}); err != nil {
return err
}
}
return e.EncodeToken(xml.EndElement{Name: start.Name})
}
// CSVOutputOptions csv output specific options
type CSVOutputOptions struct {
QuoteFields CSVQuoteFields
quoteFieldsSet bool
RecordDelimiter string
recordDelimiterSet bool
FieldDelimiter string
fieldDelimiterSet bool
QuoteCharacter string
quoteCharacterSet bool
QuoteEscapeCharacter string
quoteEscapeCharacterSet bool
}
// SetQuoteFields sets the quote field parameter in the CSV output options
func (c *CSVOutputOptions) SetQuoteFields(val CSVQuoteFields) {
c.QuoteFields = val
c.quoteFieldsSet = true
}
// SetRecordDelimiter sets the record delimiter character in the CSV output options
func (c *CSVOutputOptions) SetRecordDelimiter(val string) {
c.RecordDelimiter = val
c.recordDelimiterSet = true
}
// SetFieldDelimiter sets the field delimiter character in the CSV output options
func (c *CSVOutputOptions) SetFieldDelimiter(val string) {
c.FieldDelimiter = val
c.fieldDelimiterSet = true
}
// SetQuoteCharacter sets the quote character in the CSV output options
func (c *CSVOutputOptions) SetQuoteCharacter(val string) {
c.QuoteCharacter = val
c.quoteCharacterSet = true
}
// SetQuoteEscapeCharacter sets the quote escape character in the CSV output options
func (c *CSVOutputOptions) SetQuoteEscapeCharacter(val string) {
c.QuoteEscapeCharacter = val
c.quoteEscapeCharacterSet = true
}
// MarshalXML - produces the xml representation of the CSVOutputOptions struct
func (c CSVOutputOptions) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
if err := e.EncodeToken(start); err != nil {
return err
}
if c.QuoteFields != "" || c.quoteFieldsSet {
if err := e.EncodeElement(c.QuoteFields, xml.StartElement{Name: xml.Name{Local: "QuoteFields"}}); err != nil {
return err
}
}
if c.RecordDelimiter != "" || c.recordDelimiterSet {
if err := e.EncodeElement(c.RecordDelimiter, xml.StartElement{Name: xml.Name{Local: "RecordDelimiter"}}); err != nil {
return err
}
}
if c.FieldDelimiter != "" || c.fieldDelimiterSet {
if err := e.EncodeElement(c.FieldDelimiter, xml.StartElement{Name: xml.Name{Local: "FieldDelimiter"}}); err != nil {
return err
}
}
if c.QuoteCharacter != "" || c.quoteCharacterSet {
if err := e.EncodeElement(c.QuoteCharacter, xml.StartElement{Name: xml.Name{Local: "QuoteCharacter"}}); err != nil {
return err
}
}
if c.QuoteEscapeCharacter != "" || c.quoteEscapeCharacterSet {
if err := e.EncodeElement(c.QuoteEscapeCharacter, xml.StartElement{Name: xml.Name{Local: "QuoteEscapeCharacter"}}); err != nil {
return err
}
}
return e.EncodeToken(xml.EndElement{Name: start.Name})
}
// JSONInputOptions json input specific options
type JSONInputOptions struct {
Type JSONType
typeSet bool
}
// SetType sets the JSON type in the JSON input options
func (j *JSONInputOptions) SetType(typ JSONType) {
j.Type = typ
j.typeSet = true
}
// MarshalXML - produces the xml representation of the JSONInputOptions struct
func (j JSONInputOptions) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
if err := e.EncodeToken(start); err != nil {
return err
}
if j.Type != "" || j.typeSet {
if err := e.EncodeElement(j.Type, xml.StartElement{Name: xml.Name{Local: "Type"}}); err != nil {
return err
}
}
return e.EncodeToken(xml.EndElement{Name: start.Name})
}
// JSONOutputOptions - json output specific options
type JSONOutputOptions struct {
RecordDelimiter string
recordDelimiterSet bool
}
// SetRecordDelimiter sets the record delimiter in the JSON output options
func (j *JSONOutputOptions) SetRecordDelimiter(val string) {
j.RecordDelimiter = val
j.recordDelimiterSet = true
}
// MarshalXML - produces the xml representation of the JSONOutputOptions struct
func (j JSONOutputOptions) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
if err := e.EncodeToken(start); err != nil {
return err
}
if j.RecordDelimiter != "" || j.recordDelimiterSet {
if err := e.EncodeElement(j.RecordDelimiter, xml.StartElement{Name: xml.Name{Local: "RecordDelimiter"}}); err != nil {
return err
}
}
return e.EncodeToken(xml.EndElement{Name: start.Name})
}
// SelectObjectInputSerialization - input serialization parameters
type SelectObjectInputSerialization struct {
CompressionType SelectCompressionType `xml:"CompressionType,omitempty"`
Parquet *ParquetInputOptions `xml:"Parquet,omitempty"`
CSV *CSVInputOptions `xml:"CSV,omitempty"`
JSON *JSONInputOptions `xml:"JSON,omitempty"`
}
// SelectObjectOutputSerialization - output serialization parameters.
type SelectObjectOutputSerialization struct {
CSV *CSVOutputOptions `xml:"CSV,omitempty"`
JSON *JSONOutputOptions `xml:"JSON,omitempty"`
}
// SelectObjectOptions - represents the input select body
type SelectObjectOptions struct {
XMLName xml.Name `xml:"SelectObjectContentRequest" json:"-"`
ServerSideEncryption encrypt.ServerSide `xml:"-"`
Expression string
ExpressionType QueryExpressionType
InputSerialization SelectObjectInputSerialization
OutputSerialization SelectObjectOutputSerialization
RequestProgress struct {
Enabled bool
}
}
// Header returns the http.Header representation of the SelectObject options.
func (o SelectObjectOptions) Header() http.Header {
headers := make(http.Header)
if o.ServerSideEncryption != nil && o.ServerSideEncryption.Type() == encrypt.SSEC {
o.ServerSideEncryption.Marshal(headers)
}
return headers
}
// SelectObjectType - is the parameter which defines what type of object the
// operation is being performed on.
type SelectObjectType string
// Constants for input data types.
const (
SelectObjectTypeCSV SelectObjectType = "CSV"
SelectObjectTypeJSON = "JSON"
SelectObjectTypeParquet = "Parquet"
)
// preludeInfo is used for keeping track of necessary information from the
// prelude.
type preludeInfo struct {
totalLen uint32
headerLen uint32
}
// SelectResults is used for the streaming responses from the server.
type SelectResults struct {
pipeReader *io.PipeReader
resp *http.Response
stats *StatsMessage
progress *ProgressMessage
}
// ProgressMessage is a struct for progress xml message.
type ProgressMessage struct {
XMLName xml.Name `xml:"Progress" json:"-"`
StatsMessage
}
// StatsMessage is a struct for stat xml message.
type StatsMessage struct {
XMLName xml.Name `xml:"Stats" json:"-"`
BytesScanned int64
BytesProcessed int64
BytesReturned int64
}
// messageType represents the type of message.
type messageType string
const (
errorMsg messageType = "error"
commonMsg = "event"
)
// eventType represents the type of event.
type eventType string
// list of event-types returned by Select API.
const (
endEvent eventType = "End"
recordsEvent = "Records"
progressEvent = "Progress"
statsEvent = "Stats"
)
// contentType represents content type of event.
type contentType string
const (
xmlContent contentType = "text/xml"
)
// SelectObjectContent is an implementation of the AWS S3 SelectObjectContent API:
// http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectSELECTContent.html
func (c *Client) SelectObjectContent(ctx context.Context, bucketName, objectName string, opts SelectObjectOptions) (*SelectResults, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return nil, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return nil, err
}
selectReqBytes, err := xml.Marshal(opts)
if err != nil {
return nil, err
}
urlValues := make(url.Values)
urlValues.Set("select", "")
urlValues.Set("select-type", "2")
// Execute POST on bucket/object.
resp, err := c.executeMethod(ctx, http.MethodPost, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
customHeader: opts.Header(),
contentMD5Base64: sumMD5Base64(selectReqBytes),
contentSHA256Hex: sum256Hex(selectReqBytes),
contentBody: bytes.NewReader(selectReqBytes),
contentLength: int64(len(selectReqBytes)),
})
if err != nil {
return nil, err
}
return NewSelectResults(resp, bucketName)
}
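A sketch of a CSV select that streams matching records to stdout; the bucket, key, and query are hypothetical:

package main

import (
	"context"
	"io"
	"os"

	"github.com/minio/minio-go/v7"
)

func selectCSV(ctx context.Context, client *minio.Client) error {
	opts := minio.SelectObjectOptions{
		Expression:     "SELECT s.name FROM s3object s",
		ExpressionType: minio.QueryExpressionTypeSQL,
		InputSerialization: minio.SelectObjectInputSerialization{
			CSV: &minio.CSVInputOptions{FileHeaderInfo: minio.CSVFileHeaderInfoUse},
		},
		OutputSerialization: minio.SelectObjectOutputSerialization{
			CSV: &minio.CSVOutputOptions{RecordDelimiter: "\n"},
		},
	}
	res, err := client.SelectObjectContent(ctx, "mybucket", "people.csv", opts)
	if err != nil {
		return err
	}
	defer res.Close()
	// SelectResults is an io.Reader over the assembled Records payloads.
	_, err = io.Copy(os.Stdout, res)
	return err
}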
// NewSelectResults creates a Select Result parser that parses the response
// and returns a Reader that will return parsed and assembled select output.
func NewSelectResults(resp *http.Response, bucketName string) (*SelectResults, error) {
if resp.StatusCode != http.StatusOK {
return nil, httpRespToErrorResponse(resp, bucketName, "")
}
pipeReader, pipeWriter := io.Pipe()
streamer := &SelectResults{
resp: resp,
stats: &StatsMessage{},
progress: &ProgressMessage{},
pipeReader: pipeReader,
}
streamer.start(pipeWriter)
return streamer, nil
}
// Close - closes the underlying response body and the stream reader.
func (s *SelectResults) Close() error {
defer closeResponse(s.resp)
return s.pipeReader.Close()
}
// Read - is a reader compatible implementation for SelectObjectContent records.
func (s *SelectResults) Read(b []byte) (n int, err error) {
return s.pipeReader.Read(b)
}
// Stats - information about a request's stats when processing is complete.
func (s *SelectResults) Stats() *StatsMessage {
return s.stats
}
// Progress - information about the progress of a request.
func (s *SelectResults) Progress() *ProgressMessage {
return s.progress
}
// start is the main function that decodes the large byte array into
// several events that are sent through the eventstream.
func (s *SelectResults) start(pipeWriter *io.PipeWriter) {
go func() {
for {
var prelude preludeInfo
headers := make(http.Header)
var err error
// Create CRC code
crc := crc32.New(crc32.IEEETable)
crcReader := io.TeeReader(s.resp.Body, crc)
// Extract the prelude(12 bytes) into a struct to extract relevant information.
prelude, err = processPrelude(crcReader, crc)
if err != nil {
pipeWriter.CloseWithError(err)
closeResponse(s.resp)
return
}
// Extract the headers(variable bytes) into a struct to extract relevant information
if prelude.headerLen > 0 {
if err = extractHeader(io.LimitReader(crcReader, int64(prelude.headerLen)), headers); err != nil {
pipeWriter.CloseWithError(err)
closeResponse(s.resp)
return
}
}
// Get the actual payload length so that the appropriate amount of
// bytes can be read or parsed.
payloadLen := prelude.PayloadLen()
m := messageType(headers.Get("message-type"))
switch m {
case errorMsg:
pipeWriter.CloseWithError(errors.New(headers.Get("error-code") + ":\"" + headers.Get("error-message") + "\""))
closeResponse(s.resp)
return
case commonMsg:
// Get content-type of the payload.
c := contentType(headers.Get("content-type"))
// Get event type of the payload.
e := eventType(headers.Get("event-type"))
// Handle all supported events.
switch e {
case endEvent:
pipeWriter.Close()
closeResponse(s.resp)
return
case recordsEvent:
if _, err = io.Copy(pipeWriter, io.LimitReader(crcReader, payloadLen)); err != nil {
pipeWriter.CloseWithError(err)
closeResponse(s.resp)
return
}
case progressEvent:
switch c {
case xmlContent:
if err = xmlDecoder(io.LimitReader(crcReader, payloadLen), s.progress); err != nil {
pipeWriter.CloseWithError(err)
closeResponse(s.resp)
return
}
default:
pipeWriter.CloseWithError(fmt.Errorf("Unexpected content-type %s sent for event-type %s", c, progressEvent))
closeResponse(s.resp)
return
}
case statsEvent:
switch c {
case xmlContent:
if err = xmlDecoder(io.LimitReader(crcReader, payloadLen), s.stats); err != nil {
pipeWriter.CloseWithError(err)
closeResponse(s.resp)
return
}
default:
pipeWriter.CloseWithError(fmt.Errorf("Unexpected content-type %s sent for event-type %s", c, statsEvent))
closeResponse(s.resp)
return
}
}
}
// Ensures that the full message's CRC is correct and
// that the message is not corrupted
if err := checkCRC(s.resp.Body, crc.Sum32()); err != nil {
pipeWriter.CloseWithError(err)
closeResponse(s.resp)
return
}
}
}()
}
// PayloadLen is a function that calculates the length of the payload.
func (p preludeInfo) PayloadLen() int64 {
return int64(p.totalLen - p.headerLen - 16)
}
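The 16 bytes subtracted here are the 12-byte prelude (4-byte total length, 4-byte header length, 4-byte prelude CRC) plus the trailing 4-byte message CRC; for example, a message with totalLen 120 and headerLen 40 carries a 120 - 40 - 16 = 64 byte payload.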
// processPrelude reads the 12 bytes of the prelude, ensures the CRC is
// correct, and extracts the relevant information into the struct.
func processPrelude(prelude io.Reader, crc hash.Hash32) (preludeInfo, error) {
var err error
pInfo := preludeInfo{}
// reads total length of the message (first 4 bytes)
pInfo.totalLen, err = extractUint32(prelude)
if err != nil {
return pInfo, err
}
// reads total header length of the message (2nd 4 bytes)
pInfo.headerLen, err = extractUint32(prelude)
if err != nil {
return pInfo, err
}
// checks that the CRC is correct (3rd 4 bytes)
preCRC := crc.Sum32()
if err := checkCRC(prelude, preCRC); err != nil {
return pInfo, err
}
return pInfo, nil
}
// extracts the relevant information from the Headers.
func extractHeader(body io.Reader, myHeaders http.Header) error {
for {
// extracts the first part of the header, the header type.
headerTypeName, err := extractHeaderType(body)
if err != nil {
// Since end of file, we have read all of our headers
if err == io.EOF {
break
}
return err
}
// reads the header-value type byte (always 7, the string type) and ignores it.
extractUint8(body)
headerValueName, err := extractHeaderValue(body)
if err != nil {
return err
}
myHeaders.Set(headerTypeName, headerValueName)
}
return nil
}
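On the wire each header is framed as a 1-byte name length, the name bytes, a 1-byte value type (7, the string type), a 2-byte value length, and finally the value bytes; extractHeaderType and extractHeaderValue below consume exactly those fields.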
// extractHeaderType extracts the first half of the header message, the header type.
func extractHeaderType(body io.Reader) (string, error) {
// extracts a 1-byte integer (the header-name length)
headerNameLen, err := extractUint8(body)
if err != nil {
return "", err
}
// extracts the string with the appropriate number of bytes
headerName, err := extractString(body, int(headerNameLen))
if err != nil {
return "", err
}
return strings.TrimPrefix(headerName, ":"), nil
}
// extractHeaderValue extracts the second half of the header message, the
// header value.
func extractHeaderValue(body io.Reader) (string, error) {
bodyLen, err := extractUint16(body)
if err != nil {
return "", err
}
bodyName, err := extractString(body, int(bodyLen))
if err != nil {
return "", err
}
return bodyName, nil
}
// extracts a string of a particular number of bytes from the reader.
func extractString(source io.Reader, lenBytes int) (string, error) {
myVal := make([]byte, lenBytes)
// readFull guarantees the whole value is read; a bare Read may return short.
_, err := readFull(source, myVal)
if err != nil {
return "", err
}
return string(myVal), nil
}
// extractUint32 extracts a 4 byte integer from the byte array.
func extractUint32(r io.Reader) (uint32, error) {
buf := make([]byte, 4)
_, err := readFull(r, buf)
if err != nil {
return 0, err
}
return binary.BigEndian.Uint32(buf), nil
}
// extractUint16 extracts a 2 byte integer from the byte array.
func extractUint16(r io.Reader) (uint16, error) {
buf := make([]byte, 2)
_, err := readFull(r, buf)
if err != nil {
return 0, err
}
return binary.BigEndian.Uint16(buf), nil
}
// extractUint8 extracts a 1 byte integer from the byte array.
func extractUint8(r io.Reader) (uint8, error) {
buf := make([]byte, 1)
_, err := readFull(r, buf)
if err != nil {
return 0, err
}
return buf[0], nil
}
// checkCRC ensures that the CRC matches with the one from the reader.
func checkCRC(r io.Reader, expect uint32) error {
msgCRC, err := extractUint32(r)
if err != nil {
return err
}
if msgCRC != expect {
return fmt.Errorf("Checksum Mismatch, MessageCRC of 0x%X does not equal expected CRC of 0x%X", msgCRC, expect)
}
return nil
}

115
vendor/github.com/minio/minio-go/v7/api-stat.go generated vendored Normal file
View File

@@ -0,0 +1,115 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2020 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"net/http"
"net/url"
"github.com/minio/minio-go/v7/pkg/s3utils"
)
// BucketExists verifies if bucket exists and you have permission to access it. Allows for a Context to
// control cancellations and timeouts.
func (c *Client) BucketExists(ctx context.Context, bucketName string) (bool, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return false, err
}
// Execute HEAD on bucketName.
resp, err := c.executeMethod(ctx, http.MethodHead, requestMetadata{
bucketName: bucketName,
contentSHA256Hex: emptySHA256Hex,
})
defer closeResponse(resp)
if err != nil {
if ToErrorResponse(err).Code == "NoSuchBucket" {
return false, nil
}
return false, err
}
if resp != nil {
resperr := httpRespToErrorResponse(resp, bucketName, "")
if ToErrorResponse(resperr).Code == "NoSuchBucket" {
return false, nil
}
if resp.StatusCode != http.StatusOK {
return false, httpRespToErrorResponse(resp, bucketName, "")
}
}
return true, nil
}
// StatObject verifies if object exists and you have permission to access.
func (c *Client) StatObject(ctx context.Context, bucketName, objectName string, opts StatObjectOptions) (ObjectInfo, error) {
// Input validation.
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return ObjectInfo{}, err
}
if err := s3utils.CheckValidObjectName(objectName); err != nil {
return ObjectInfo{}, err
}
headers := opts.Header()
if opts.Internal.ReplicationDeleteMarker {
headers.Set(minIOBucketReplicationDeleteMarker, "true")
}
urlValues := make(url.Values)
if opts.VersionID != "" {
urlValues.Set("versionId", opts.VersionID)
}
// Execute HEAD on objectName.
resp, err := c.executeMethod(ctx, http.MethodHead, requestMetadata{
bucketName: bucketName,
objectName: objectName,
queryValues: urlValues,
contentSHA256Hex: emptySHA256Hex,
customHeader: headers,
})
defer closeResponse(resp)
if err != nil {
return ObjectInfo{}, err
}
if resp != nil {
deleteMarker := resp.Header.Get(amzDeleteMarker) == "true"
if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
if resp.StatusCode == http.StatusMethodNotAllowed && opts.VersionID != "" && deleteMarker {
errResp := ErrorResponse{
StatusCode: resp.StatusCode,
Code: "MethodNotAllowed",
Message: "The specified method is not allowed against this resource.",
BucketName: bucketName,
Key: objectName,
}
return ObjectInfo{
VersionID: resp.Header.Get(amzVersionID),
IsDeleteMarker: deleteMarker,
}, errResp
}
return ObjectInfo{
VersionID: resp.Header.Get(amzVersionID),
IsDeleteMarker: deleteMarker,
}, httpRespToErrorResponse(resp, bucketName, objectName)
}
}
return ToObjectInfo(bucketName, objectName, resp.Header)
}
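A sketch of a basic stat call, assuming the hypothetical client and bucket used above plus the surrounding package and imports (context, log, minio):

func statExample(ctx context.Context, client *minio.Client) error {
	info, err := client.StatObject(ctx, "mybucket", "report.pdf", minio.StatObjectOptions{})
	if err != nil {
		return err
	}
	log.Printf("size=%d etag=%s modified=%s", info.Size, info.ETag, info.LastModified)
	return nil
}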

930
vendor/github.com/minio/minio-go/v7/api.go generated vendored Normal file
View File

@@ -0,0 +1,930 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2018 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"bytes"
"context"
"errors"
"fmt"
"io"
"io/ioutil"
"math/rand"
"net"
"net/http"
"net/http/cookiejar"
"net/http/httputil"
"net/url"
"os"
"runtime"
"strings"
"sync"
"sync/atomic"
"time"
md5simd "github.com/minio/md5-simd"
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/signer"
"golang.org/x/net/publicsuffix"
)
// Client implements Amazon S3 compatible methods.
type Client struct {
// Standard options.
// Parsed endpoint url provided by the user.
endpointURL *url.URL
// Holds various credential providers.
credsProvider *credentials.Credentials
// Custom signerType value overrides all credentials.
overrideSignerType credentials.SignatureType
// User supplied.
appInfo struct {
appName string
appVersion string
}
// Indicate whether we are using https or not
secure bool
// Needs allocation.
httpClient *http.Client
bucketLocCache *bucketLocationCache
// Advanced functionality.
isTraceEnabled bool
traceErrorsOnly bool
traceOutput io.Writer
// S3 specific accelerated endpoint.
s3AccelerateEndpoint string
// Region endpoint
region string
// Random seed.
random *rand.Rand
// lookup indicates type of url lookup supported by server. If not specified,
// default to Auto.
lookup BucketLookupType
// Factory for MD5 hash functions.
md5Hasher func() md5simd.Hasher
sha256Hasher func() md5simd.Hasher
healthStatus int32
}
// Options for New method
type Options struct {
Creds *credentials.Credentials
Secure bool
Transport http.RoundTripper
Region string
BucketLookup BucketLookupType
// Custom hash routines. Leave nil to use standard.
CustomMD5 func() md5simd.Hasher
CustomSHA256 func() md5simd.Hasher
}
// Global constants.
const (
libraryName = "minio-go"
libraryVersion = "v7.0.29"
)
// User Agent should always follow the below style.
// Please open an issue to discuss any new changes here.
//
// MinIO (OS; ARCH) LIB/VER APP/VER
const (
libraryUserAgentPrefix = "MinIO (" + runtime.GOOS + "; " + runtime.GOARCH + ") "
libraryUserAgent = libraryUserAgentPrefix + libraryName + "/" + libraryVersion
)
// BucketLookupType is type of url lookup supported by server.
type BucketLookupType int
// Different types of url lookup supported by the server. Initialized to BucketLookupAuto.
const (
BucketLookupAuto BucketLookupType = iota
BucketLookupDNS
BucketLookupPath
)
// New - instantiate minio client with options
func New(endpoint string, opts *Options) (*Client, error) {
if opts == nil {
return nil, errors.New("no options provided")
}
clnt, err := privateNew(endpoint, opts)
if err != nil {
return nil, err
}
// Google cloud storage should be set to signature V2, force it if not.
if s3utils.IsGoogleEndpoint(*clnt.endpointURL) {
clnt.overrideSignerType = credentials.SignatureV2
}
// If Amazon S3, set to signature v4.
if s3utils.IsAmazonEndpoint(*clnt.endpointURL) {
clnt.overrideSignerType = credentials.SignatureV4
}
return clnt, nil
}
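// A hedged usage sketch for New (the endpoint and credentials below are
// placeholders, not real values):
//
//    client, err := minio.New("play.min.io", &minio.Options{
//        Creds:  credentials.NewStaticV4("ACCESS-KEY", "SECRET-KEY", ""),
//        Secure: true,
//    })
//    if err != nil {
//        log.Fatalln(err)
//    }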
// EndpointURL returns the URL of the S3 endpoint.
func (c *Client) EndpointURL() *url.URL {
endpoint := *c.endpointURL // copy to prevent callers from modifying internal state
return &endpoint
}
// lockedRandSource provides protected rand source, implements rand.Source interface.
type lockedRandSource struct {
lk sync.Mutex
src rand.Source
}
// Int63 returns a non-negative pseudo-random 63-bit integer as an int64.
func (r *lockedRandSource) Int63() (n int64) {
r.lk.Lock()
n = r.src.Int63()
r.lk.Unlock()
return
}
// Seed uses the provided seed value to initialize the generator to a
// deterministic state.
func (r *lockedRandSource) Seed(seed int64) {
r.lk.Lock()
r.src.Seed(seed)
r.lk.Unlock()
}
func privateNew(endpoint string, opts *Options) (*Client, error) {
// construct endpoint.
endpointURL, err := getEndpointURL(endpoint, opts.Secure)
if err != nil {
return nil, err
}
// Initialize cookies to preserve server sent cookies if any and replay
// them upon each request.
jar, err := cookiejar.New(&cookiejar.Options{PublicSuffixList: publicsuffix.List})
if err != nil {
return nil, err
}
// instantiate new Client.
clnt := new(Client)
// Save the credentials.
clnt.credsProvider = opts.Creds
// Remember whether we are using https or not
clnt.secure = opts.Secure
// Save endpoint URL, user agent for future uses.
clnt.endpointURL = endpointURL
transport := opts.Transport
if transport == nil {
transport, err = DefaultTransport(opts.Secure)
if err != nil {
return nil, err
}
}
// Instantiate http client and bucket location cache.
clnt.httpClient = &http.Client{
Jar: jar,
Transport: transport,
CheckRedirect: func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
},
}
// Sets a custom region; if the region is empty, the bucket location cache is used automatically.
if opts.Region == "" {
opts.Region = s3utils.GetRegionFromURL(*clnt.endpointURL)
}
clnt.region = opts.Region
// Instantiate bucket location cache.
clnt.bucketLocCache = newBucketLocationCache()
// Introduce a new locked random seed.
clnt.random = rand.New(&lockedRandSource{src: rand.NewSource(time.Now().UTC().UnixNano())})
// Add default md5 hasher.
clnt.md5Hasher = opts.CustomMD5
clnt.sha256Hasher = opts.CustomSHA256
if clnt.md5Hasher == nil {
clnt.md5Hasher = newMd5Hasher
}
if clnt.sha256Hasher == nil {
clnt.sha256Hasher = newSHA256Hasher
}
// Sets bucket lookup style, whether server accepts DNS or Path lookup. Default is Auto - determined
// by the SDK. When Auto is specified, DNS lookup is used for Amazon/Google cloud endpoints and Path for all other endpoints.
clnt.lookup = opts.BucketLookup
// healthcheck is not initialized
clnt.healthStatus = unknown
// Return.
return clnt, nil
}
// SetAppInfo - add application details to user agent.
func (c *Client) SetAppInfo(appName string, appVersion string) {
// If app name and version are not set, we do not set a new user agent.
if appName != "" && appVersion != "" {
c.appInfo.appName = appName
c.appInfo.appVersion = appVersion
}
}
// TraceOn - enable HTTP tracing.
func (c *Client) TraceOn(outputStream io.Writer) {
// if outputStream is nil then default to os.Stdout.
if outputStream == nil {
outputStream = os.Stdout
}
// Sets a new output stream.
c.traceOutput = outputStream
// Enable tracing.
c.isTraceEnabled = true
}
// TraceErrorsOnlyOn - same as TraceOn, but only errors will be traced.
func (c *Client) TraceErrorsOnlyOn(outputStream io.Writer) {
c.TraceOn(outputStream)
c.traceErrorsOnly = true
}
// TraceErrorsOnlyOff - Turns off errors-only tracing; everything will be traced after this call.
// If all tracing needs to be turned off, call TraceOff().
func (c *Client) TraceErrorsOnlyOff() {
c.traceErrorsOnly = false
}
// TraceOff - disable HTTP tracing.
func (c *Client) TraceOff() {
// Disable tracing.
c.isTraceEnabled = false
c.traceErrorsOnly = false
}
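// Tracing usage sketch (writing to os.Stderr is an arbitrary choice):
//
//    client.TraceOn(os.Stderr)            // trace every request/response
//    client.TraceErrorsOnlyOn(os.Stderr)  // or: trace failed exchanges only
//    defer client.TraceOff()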
// SetS3TransferAccelerate - turns s3 accelerated endpoint on or off for all your
// requests. This feature is specific to S3; for all other endpoints this
// function does nothing. To read further details on s3 transfer acceleration
// please visit -
// http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
func (c *Client) SetS3TransferAccelerate(accelerateEndpoint string) {
if s3utils.IsAmazonEndpoint(*c.endpointURL) {
c.s3AccelerateEndpoint = accelerateEndpoint
}
}
// hashMaterials provides relevant initialized hash algorithm writers
// based on the expected signature type.
//
// - For a signature v4 request over an insecure connection, compute only sha256.
// - For a signature v4 request over a secure connection, compute only md5.
// - For an anonymous request, compute md5.
func (c *Client) hashMaterials(isMd5Requested bool) (hashAlgos map[string]md5simd.Hasher, hashSums map[string][]byte) {
hashSums = make(map[string][]byte)
hashAlgos = make(map[string]md5simd.Hasher)
if c.overrideSignerType.IsV4() {
if c.secure {
hashAlgos["md5"] = c.md5Hasher()
} else {
hashAlgos["sha256"] = c.sha256Hasher()
}
} else {
if c.overrideSignerType.IsAnonymous() {
hashAlgos["md5"] = c.md5Hasher()
}
}
if isMd5Requested {
hashAlgos["md5"] = c.md5Hasher()
}
return hashAlgos, hashSums
}
const (
unknown = -1
offline = 0
online = 1
)
// IsOnline returns true if the healthcheck is enabled and the client is online.
func (c *Client) IsOnline() bool {
return !c.IsOffline()
}
// markOffline sets an online healthStatus to offline.
func (c *Client) markOffline() {
atomic.CompareAndSwapInt32(&c.healthStatus, online, offline)
}
// IsOffline returns true if the healthcheck is enabled and the client is offline.
func (c *Client) IsOffline() bool {
return atomic.LoadInt32(&c.healthStatus) == offline
}
// HealthCheck starts a healthcheck to see if the endpoint is up. It returns a context
// cancellation function, and an error if the health check has already been started.
func (c *Client) HealthCheck(hcDuration time.Duration) (context.CancelFunc, error) {
if atomic.LoadInt32(&c.healthStatus) == online {
return nil, fmt.Errorf("health check is running")
}
if hcDuration < 1*time.Second {
return nil, fmt.Errorf("health check duration should be atleast 1 second")
}
ctx, cancelFn := context.WithCancel(context.Background())
atomic.StoreInt32(&c.healthStatus, online)
probeBucketName := randString(60, rand.NewSource(time.Now().UnixNano()), "probe-health-")
go func(duration time.Duration) {
timer := time.NewTimer(duration)
defer timer.Stop()
for {
select {
case <-ctx.Done():
atomic.StoreInt32(&c.healthStatus, unknown)
return
case <-timer.C:
timer.Reset(duration)
// Do health check the first time and ONLY if the connection is marked offline
if c.IsOffline() {
gctx, gcancel := context.WithTimeout(context.Background(), 3*time.Second)
_, err := c.getBucketLocation(gctx, probeBucketName)
gcancel()
if IsNetworkOrHostDown(err, false) {
// Still a network error; nothing to do here, keep probing.
continue
}
switch ToErrorResponse(err).Code {
case "NoSuchBucket", "AccessDenied", "":
atomic.CompareAndSwapInt32(&c.healthStatus, offline, online)
}
}
}
}
}(hcDuration)
return cancelFn, nil
}
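// HealthCheck usage sketch (the 5s interval is an example value):
//
//    cancel, err := client.HealthCheck(5 * time.Second)
//    if err != nil {
//        return err
//    }
//    defer cancel()
//    if client.IsOnline() {
//        // safe to issue requests
//    }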
// requestMetadata - is container for all the values to make a request.
type requestMetadata struct {
// If set, newRequest presigns the URL.
presignURL bool
// User supplied.
bucketName string
objectName string
queryValues url.Values
customHeader http.Header
extraPresignHeader http.Header
expires int64
// Generated by our internal code.
bucketLocation string
contentBody io.Reader
contentLength int64
contentMD5Base64 string // carries base64 encoded md5sum
contentSHA256Hex string // carries hex encoded sha256sum
}
// dumpHTTP - dump HTTP request and response.
func (c *Client) dumpHTTP(req *http.Request, resp *http.Response) error {
// Starts http dump.
_, err := fmt.Fprintln(c.traceOutput, "---------START-HTTP---------")
if err != nil {
return err
}
// Filter out Signature field from Authorization header.
origAuth := req.Header.Get("Authorization")
if origAuth != "" {
req.Header.Set("Authorization", redactSignature(origAuth))
}
// Only display request header.
reqTrace, err := httputil.DumpRequestOut(req, false)
if err != nil {
return err
}
// Write request to trace output.
_, err = fmt.Fprint(c.traceOutput, string(reqTrace))
if err != nil {
return err
}
// Only display response header.
var respTrace []byte
// For errors we make sure to dump response body as well.
if resp.StatusCode != http.StatusOK &&
resp.StatusCode != http.StatusPartialContent &&
resp.StatusCode != http.StatusNoContent {
respTrace, err = httputil.DumpResponse(resp, true)
if err != nil {
return err
}
} else {
respTrace, err = httputil.DumpResponse(resp, false)
if err != nil {
return err
}
}
// Write response to trace output.
_, err = fmt.Fprint(c.traceOutput, strings.TrimSuffix(string(respTrace), "\r\n"))
if err != nil {
return err
}
// Ends the http dump.
_, err = fmt.Fprintln(c.traceOutput, "---------END-HTTP---------")
if err != nil {
return err
}
// Returns success.
return nil
}
// do - execute http request.
func (c *Client) do(req *http.Request) (resp *http.Response, err error) {
defer func() {
if IsNetworkOrHostDown(err, false) {
c.markOffline()
}
}()
resp, err = c.httpClient.Do(req)
if err != nil {
// Handle this specifically for now until future Golang versions fix this issue properly.
if urlErr, ok := err.(*url.Error); ok {
if strings.Contains(urlErr.Err.Error(), "EOF") {
return nil, &url.Error{
Op: urlErr.Op,
URL: urlErr.URL,
Err: errors.New("Connection closed by foreign host " + urlErr.URL + ". Retry again."),
}
}
}
return nil, err
}
// Response cannot be nil; report an error if that's the case.
if resp == nil {
msg := "Response is empty. " + reportIssue
return nil, errInvalidArgument(msg)
}
// If trace is enabled, dump http request and response,
// except when traceErrorsOnly is enabled and the response's status code is OK
if c.isTraceEnabled && !(c.traceErrorsOnly && resp.StatusCode == http.StatusOK) {
err = c.dumpHTTP(req, resp)
if err != nil {
return nil, err
}
}
return resp, nil
}
// List of success status.
var successStatus = []int{
http.StatusOK,
http.StatusNoContent,
http.StatusPartialContent,
}
// executeMethod - instantiates a given method, and retries the
// request upon any error up to maxRetries attempts in a binomially
// delayed manner using a standard backoff algorithm.
func (c *Client) executeMethod(ctx context.Context, method string, metadata requestMetadata) (res *http.Response, err error) {
if c.IsOffline() {
return nil, errors.New(c.endpointURL.String() + " is offline.")
}
var retryable bool // Indicates if request can be retried.
var bodySeeker io.Seeker // Extracted seeker from io.Reader.
reqRetry := MaxRetry // Indicates how many times we can retry the request
if metadata.contentBody != nil {
// If the body is seekable then the request is retryable.
bodySeeker, retryable = metadata.contentBody.(io.Seeker)
switch bodySeeker {
case os.Stdin, os.Stdout, os.Stderr:
retryable = false
}
// Retry only when reader is seekable
if !retryable {
reqRetry = 1
}
// Figure out if the body can be closed - if yes
// we will definitely close it upon the function
// return.
bodyCloser, ok := metadata.contentBody.(io.Closer)
if ok {
defer bodyCloser.Close()
}
}
// Create cancel context to control 'newRetryTimer' go routine.
retryCtx, cancel := context.WithCancel(ctx)
// Indicate to our routine to exit cleanly upon return.
defer cancel()
for range c.newRetryTimer(retryCtx, reqRetry, DefaultRetryUnit, DefaultRetryCap, MaxJitter) {
// Retry executes the following function body if the request has an
// error until maxRetries have been exhausted; retry attempts are
// performed after waiting for a given period of time in a
// binomial fashion.
if retryable {
// Seek back to beginning for each attempt.
if _, err = bodySeeker.Seek(0, io.SeekStart); err != nil {
// If seek failed, no need to retry.
return nil, err
}
}
// Instantiate a new request.
var req *http.Request
req, err = c.newRequest(ctx, method, metadata)
if err != nil {
errResponse := ToErrorResponse(err)
if isS3CodeRetryable(errResponse.Code) {
continue // Retry.
}
return nil, err
}
// Initiate the request.
res, err = c.do(req)
if err != nil {
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return nil, err
}
// Retry the request
continue
}
// For any known successful http status, return quickly.
for _, httpStatus := range successStatus {
if httpStatus == res.StatusCode {
return res, nil
}
}
// Read the body to be saved later.
errBodyBytes, err := ioutil.ReadAll(res.Body)
// res.Body should be closed
closeResponse(res)
if err != nil {
return nil, err
}
// Save the body.
errBodySeeker := bytes.NewReader(errBodyBytes)
res.Body = ioutil.NopCloser(errBodySeeker)
// For errors, verify if it's retryable; otherwise fail quickly.
errResponse := ToErrorResponse(httpRespToErrorResponse(res, metadata.bucketName, metadata.objectName))
// Save the body back again.
errBodySeeker.Seek(0, io.SeekStart) // Seek back to starting point.
res.Body = ioutil.NopCloser(errBodySeeker)
// If the bucket region is set in the error response and the error
// code indicates an invalid region, we can retry the request
// with the new region.
//
// Additionally we should only retry if bucketLocation and the custom
// region are empty.
if c.region == "" {
switch errResponse.Code {
case "AuthorizationHeaderMalformed":
fallthrough
case "InvalidRegion":
fallthrough
case "AccessDenied":
if errResponse.Region == "" {
// Region is empty; we simply return the error.
return res, err
}
// Region is not empty; figure out a way to
// handle this appropriately.
if metadata.bucketName != "" {
// Gather Cached location only if bucketName is present.
if location, cachedOk := c.bucketLocCache.Get(metadata.bucketName); cachedOk && location != errResponse.Region {
c.bucketLocCache.Set(metadata.bucketName, errResponse.Region)
continue // Retry.
}
} else {
// This is for ListBuckets() fallback.
if errResponse.Region != metadata.bucketLocation {
// Retry if the error response has a different region
// than the request we just made.
metadata.bucketLocation = errResponse.Region
continue // Retry
}
}
}
}
// Verify if error response code is retryable.
if isS3CodeRetryable(errResponse.Code) {
continue // Retry.
}
// Verify if http status code is retryable.
if isHTTPStatusRetryable(res.StatusCode) {
continue // Retry.
}
// For all other cases break out of the retry loop.
break
}
// Return an error when the retry context is canceled or its deadline is exceeded
if e := retryCtx.Err(); e != nil {
return nil, e
}
return res, err
}
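// Callers typically classify the error that eventually escapes this retry
// loop via ToErrorResponse; a sketch (bucket/object names are placeholders):
//
//    _, err := client.StatObject(ctx, "my-bucket", "missing-object", minio.StatObjectOptions{})
//    if minio.ToErrorResponse(err).Code == "NoSuchKey" {
//        // the object does not exist
//    }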
// newRequest - instantiate a new HTTP request for a given method.
func (c *Client) newRequest(ctx context.Context, method string, metadata requestMetadata) (req *http.Request, err error) {
// If no method is supplied default to 'POST'.
if method == "" {
method = http.MethodPost
}
location := metadata.bucketLocation
if location == "" {
if metadata.bucketName != "" {
// Gather location only if bucketName is present.
location, err = c.getBucketLocation(ctx, metadata.bucketName)
if err != nil {
return nil, err
}
}
if location == "" {
location = getDefaultLocation(*c.endpointURL, c.region)
}
}
// Check if the target url supports virtual host style.
// We explicitly disallow MakeBucket calls from using virtual DNS style,
// since the resolution may fail.
isMakeBucket := (metadata.objectName == "" && method == http.MethodPut && len(metadata.queryValues) == 0)
isVirtualHost := c.isVirtualHostStyleRequest(*c.endpointURL, metadata.bucketName) && !isMakeBucket
// Construct a new target URL.
targetURL, err := c.makeTargetURL(metadata.bucketName, metadata.objectName, location,
isVirtualHost, metadata.queryValues)
if err != nil {
return nil, err
}
// Initialize a new HTTP request for the method.
req, err = http.NewRequestWithContext(ctx, method, targetURL.String(), nil)
if err != nil {
return nil, err
}
// Get credentials from the configured credentials provider.
value, err := c.credsProvider.Get()
if err != nil {
return nil, err
}
var (
signerType = value.SignerType
accessKeyID = value.AccessKeyID
secretAccessKey = value.SecretAccessKey
sessionToken = value.SessionToken
)
// If a custom signer is set, override the behavior.
if c.overrideSignerType != credentials.SignatureDefault {
signerType = c.overrideSignerType
}
// If signerType returned by credentials helper is anonymous,
// then do not sign regardless of signerType override.
if value.SignerType == credentials.SignatureAnonymous {
signerType = credentials.SignatureAnonymous
}
// Generate presign url if needed, return right here.
if metadata.expires != 0 && metadata.presignURL {
if signerType.IsAnonymous() {
return nil, errInvalidArgument("Presigned URLs cannot be generated with anonymous credentials.")
}
if metadata.extraPresignHeader != nil {
if signerType.IsV2() {
return nil, errInvalidArgument("Extra signed headers for Presign with Signature V2 is not supported.")
}
for k, v := range metadata.extraPresignHeader {
req.Header.Set(k, v[0])
}
}
if signerType.IsV2() {
// Presign URL with signature v2.
req = signer.PreSignV2(*req, accessKeyID, secretAccessKey, metadata.expires, isVirtualHost)
} else if signerType.IsV4() {
// Presign URL with signature v4.
req = signer.PreSignV4(*req, accessKeyID, secretAccessKey, sessionToken, location, metadata.expires)
}
return req, nil
}
// Set 'User-Agent' header for the request.
c.setUserAgent(req)
// Set all headers.
for k, v := range metadata.customHeader {
req.Header.Set(k, v[0])
}
// Go net/http notoriously closes the request body.
// - The request Body, if non-nil, will be closed by the underlying Transport, even on errors.
// This can cause underlying *os.File seekers to fail; avoid that
// by making sure to wrap the closer as a nop.
if metadata.contentLength == 0 {
req.Body = nil
} else {
req.Body = ioutil.NopCloser(metadata.contentBody)
}
// Set incoming content-length.
req.ContentLength = metadata.contentLength
if req.ContentLength <= -1 {
// For unknown content length, we upload using transfer-encoding: chunked.
req.TransferEncoding = []string{"chunked"}
}
// set md5Sum for content protection.
if len(metadata.contentMD5Base64) > 0 {
req.Header.Set("Content-Md5", metadata.contentMD5Base64)
}
// For anonymous requests just return.
if signerType.IsAnonymous() {
return req, nil
}
switch {
case signerType.IsV2():
// Add signature version '2' authorization header.
req = signer.SignV2(*req, accessKeyID, secretAccessKey, isVirtualHost)
case metadata.objectName != "" && metadata.queryValues == nil && method == http.MethodPut && metadata.customHeader.Get("X-Amz-Copy-Source") == "" && !c.secure:
// Streaming signature is used by default for a PUT object request. Additionally, we also
// check whether the initialized client is secure; if it is, we don't need to perform the
// streaming signature.
req = signer.StreamingSignV4(req, accessKeyID,
secretAccessKey, sessionToken, location, metadata.contentLength, time.Now().UTC())
default:
// Set sha256 sum for signature calculation only with signature version '4'.
shaHeader := unsignedPayload
if metadata.contentSHA256Hex != "" {
shaHeader = metadata.contentSHA256Hex
}
req.Header.Set("X-Amz-Content-Sha256", shaHeader)
// Add signature version '4' authorization header.
req = signer.SignV4(*req, accessKeyID, secretAccessKey, sessionToken, location)
}
// Return request.
return req, nil
}
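// The presign branch above backs the public presign helpers; a hedged
// usage sketch (names and expiry below are placeholders):
//
//    u, err := client.PresignedGetObject(ctx, "my-bucket", "my-object", 15*time.Minute, nil)
//    if err == nil {
//        fmt.Println(u.String()) // shareable URL, valid for 15 minutes
//    }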
// setUserAgent sets the 'User-Agent' header on the request.
func (c *Client) setUserAgent(req *http.Request) {
req.Header.Set("User-Agent", libraryUserAgent)
if c.appInfo.appName != "" && c.appInfo.appVersion != "" {
req.Header.Set("User-Agent", libraryUserAgent+" "+c.appInfo.appName+"/"+c.appInfo.appVersion)
}
}
// makeTargetURL makes a new target URL.
func (c *Client) makeTargetURL(bucketName, objectName, bucketLocation string, isVirtualHostStyle bool, queryValues url.Values) (*url.URL, error) {
host := c.endpointURL.Host
// For Amazon S3 endpoint, try to fetch location based endpoint.
if s3utils.IsAmazonEndpoint(*c.endpointURL) {
if c.s3AccelerateEndpoint != "" && bucketName != "" {
// http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
// Disable transfer acceleration for non-compliant bucket names.
if strings.Contains(bucketName, ".") {
return nil, errTransferAccelerationBucket(bucketName)
}
// If transfer acceleration is requested set new host.
// For more details about enabling transfer acceleration read here.
// http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
host = c.s3AccelerateEndpoint
} else {
// Do not change the host if the endpoint URL is a FIPS S3 endpoint or an S3 PrivateLink interface endpoint
if !s3utils.IsAmazonFIPSEndpoint(*c.endpointURL) && !s3utils.IsAmazonPrivateLinkEndpoint(*c.endpointURL) {
// Fetch new host based on the bucket location.
host = getS3Endpoint(bucketLocation)
}
}
}
// Save scheme.
scheme := c.endpointURL.Scheme
// Strip port 80 and 443 so we won't send these ports in the Host header.
// The reason is that browsers and curl automatically remove :80 and :443
// from generated presigned URLs, which then causes a signature mismatch error.
if h, p, err := net.SplitHostPort(host); err == nil {
if scheme == "http" && p == "80" || scheme == "https" && p == "443" {
host = h
// Bracket only IPv6 literals; IPv4 hosts must not be bracketed.
if ip := net.ParseIP(h); ip != nil && ip.To4() == nil {
host = "[" + h + "]"
}
}
}
urlStr := scheme + "://" + host + "/"
// Make URL only if bucketName is available, otherwise use the
// endpoint URL.
if bucketName != "" {
// If endpoint supports virtual host style use that always.
// Currently only S3 and Google Cloud Storage would support
// virtual host style.
if isVirtualHostStyle {
urlStr = scheme + "://" + bucketName + "." + host + "/"
if objectName != "" {
urlStr += s3utils.EncodePath(objectName)
}
} else {
// If not, fall back to using path style.
urlStr = urlStr + bucketName + "/"
if objectName != "" {
urlStr += s3utils.EncodePath(objectName)
}
}
}
// If there are any query values, add them to the end.
if len(queryValues) > 0 {
urlStr = urlStr + "?" + s3utils.QueryEncode(queryValues)
}
return url.Parse(urlStr)
}
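// For illustration, with a hypothetical bucket "my-bucket" and object "a/b.txt"
// (hosts below are examples, not computed values):
//
//    virtual-host style: https://my-bucket.s3.amazonaws.com/a/b.txt
//    path style:         https://minio.example.com/my-bucket/a/b.txt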
// isVirtualHostStyleRequest returns true if virtual hosted style requests are to be used.
func (c *Client) isVirtualHostStyleRequest(url url.URL, bucketName string) bool {
if bucketName == "" {
return false
}
if c.lookup == BucketLookupDNS {
return true
}
if c.lookup == BucketLookupPath {
return false
}
// Default to virtual host style only for Amazon/Google storage. In all other cases use
// path style requests.
return s3utils.IsVirtualHostSupported(url, bucketName)
}

259
vendor/github.com/minio/minio-go/v7/bucket-cache.go generated vendored Normal file

@ -0,0 +1,259 @@
/*
* MinIO Go Library for Amazon S3 Compatible Cloud Storage
* Copyright 2015-2017 MinIO, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package minio
import (
"context"
"net"
"net/http"
"net/url"
"path"
"sync"
"github.com/minio/minio-go/v7/pkg/credentials"
"github.com/minio/minio-go/v7/pkg/s3utils"
"github.com/minio/minio-go/v7/pkg/signer"
)
// bucketLocationCache - Provides simple mechanism to hold bucket
// locations in memory.
type bucketLocationCache struct {
// mutex is used for handling the concurrent
// read/write requests for cache.
sync.RWMutex
// items holds the cached bucket locations.
items map[string]string
}
// newBucketLocationCache - Provides a new bucket location cache to be
// used internally with the client object.
func newBucketLocationCache() *bucketLocationCache {
return &bucketLocationCache{
items: make(map[string]string),
}
}
// Get - Returns a value of a given key if it exists.
func (r *bucketLocationCache) Get(bucketName string) (location string, ok bool) {
r.RLock()
defer r.RUnlock()
location, ok = r.items[bucketName]
return
}
// Set - Will persist a value into cache.
func (r *bucketLocationCache) Set(bucketName string, location string) {
r.Lock()
defer r.Unlock()
r.items[bucketName] = location
}
// Delete - Deletes a bucket name from cache.
func (r *bucketLocationCache) Delete(bucketName string) {
r.Lock()
defer r.Unlock()
delete(r.items, bucketName)
}
// GetBucketLocation - gets the location for the bucket name from the location cache;
// if not cached, fetches it freshly by making a new request.
func (c *Client) GetBucketLocation(ctx context.Context, bucketName string) (string, error) {
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", err
}
return c.getBucketLocation(ctx, bucketName)
}
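// Usage sketch (the bucket name is a placeholder):
//
//    location, err := client.GetBucketLocation(ctx, "my-bucket")
//    if err == nil {
//        fmt.Println("bucket region:", location) // e.g. "us-east-1"
//    }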
// getBucketLocation - Gets the location for the bucketName from the location map cache;
// if not cached, fetches it freshly by making a new request.
func (c *Client) getBucketLocation(ctx context.Context, bucketName string) (string, error) {
if err := s3utils.CheckValidBucketName(bucketName); err != nil {
return "", err
}
// If a region is set then there is no need to fetch the bucket location.
if c.region != "" {
return c.region, nil
}
if location, ok := c.bucketLocCache.Get(bucketName); ok {
return location, nil
}
// Initialize a new request.
req, err := c.getBucketLocationRequest(ctx, bucketName)
if err != nil {
return "", err
}
// Initiate the request.
resp, err := c.do(req)
defer closeResponse(resp)
if err != nil {
return "", err
}
location, err := processBucketLocationResponse(resp, bucketName)
if err != nil {
return "", err
}
c.bucketLocCache.Set(bucketName, location)
return location, nil
}
// processBucketLocationResponse processes the getBucketLocation http response from the server.
func processBucketLocationResponse(resp *http.Response, bucketName string) (bucketLocation string, err error) {
if resp != nil {
if resp.StatusCode != http.StatusOK {
err = httpRespToErrorResponse(resp, bucketName, "")
errResp := ToErrorResponse(err)
// For access denied error, it could be an anonymous
// request. Move forward and let the top level callers
// succeed if possible based on their policy.
switch errResp.Code {
case "NotImplemented":
switch errResp.Server {
case "AmazonSnowball":
return "snowball", nil
case "cloudflare":
return "us-east-1", nil
}
case "AuthorizationHeaderMalformed":
fallthrough
case "InvalidRegion":
fallthrough
case "AccessDenied":
if errResp.Region == "" {
return "us-east-1", nil
}
return errResp.Region, nil
}
return "", err
}
}
// Extract location.
var locationConstraint string
err = xmlDecoder(resp.Body, &locationConstraint)
if err != nil {
return "", err
}
location := locationConstraint
// An empty location means 'us-east-1'.
if location == "" {
location = "us-east-1"
}
// Location can be 'EU'; convert it to the meaningful 'eu-west-1'.
if location == "EU" {
location = "eu-west-1"
}
// Return the resolved location; the caller is responsible for caching it.
return location, nil
}
// getBucketLocationRequest - wrapper that creates a new getBucketLocation request.
func (c *Client) getBucketLocationRequest(ctx context.Context, bucketName string) (*http.Request, error) {
// Set location query.
urlValues := make(url.Values)
urlValues.Set("location", "")
// Set get bucket location always as path style.
targetURL := *c.endpointURL
// Strip standard ports (80/443) from the host, just as the makeTargetURL method in api.go does.
if h, p, err := net.SplitHostPort(targetURL.Host); err == nil {
if targetURL.Scheme == "http" && p == "80" || targetURL.Scheme == "https" && p == "443" {
targetURL.Host = h
// Bracket only IPv6 literals; IPv4 hosts must not be bracketed.
if ip := net.ParseIP(h); ip != nil && ip.To4() == nil {
targetURL.Host = "[" + h + "]"
}
}
}
isVirtualHost := s3utils.IsVirtualHostSupported(targetURL, bucketName)
var urlStr string
// Use the virtual hosted path only for Aliyun OSS; Amazon & Google endpoints are compatible with path style here.
if isVirtualHost && s3utils.IsAliyunOSSEndpoint(targetURL) {
urlStr = c.endpointURL.Scheme + "://" + bucketName + "." + targetURL.Host + "/?location"
} else {
targetURL.Path = path.Join(bucketName, "") + "/"
targetURL.RawQuery = urlValues.Encode()
urlStr = targetURL.String()
}
// Get a new HTTP request for the method.
req, err := http.NewRequestWithContext(ctx, http.MethodGet, urlStr, nil)
if err != nil {
return nil, err
}
// Set UserAgent for the request.
c.setUserAgent(req)
// Get credentials from the configured credentials provider.
value, err := c.credsProvider.Get()
if err != nil {
return nil, err
}
var (
signerType = value.SignerType
accessKeyID = value.AccessKeyID
secretAccessKey = value.SecretAccessKey
sessionToken = value.SessionToken
)
// If a custom signer is set, override the behavior.
if c.overrideSignerType != credentials.SignatureDefault {
signerType = c.overrideSignerType
}
// If signerType returned by credentials helper is anonymous,
// then do not sign regardless of signerType override.
if value.SignerType == credentials.SignatureAnonymous {
signerType = credentials.SignatureAnonymous
}
if signerType.IsAnonymous() {
return req, nil
}
if signerType.IsV2() {
// Get Bucket Location calls should always be path style.
isVirtualHost := false
req = signer.SignV2(*req, accessKeyID, secretAccessKey, isVirtualHost)
return req, nil
}
// Set sha256 sum for signature calculation only with signature version '4'.
contentSha256 := emptySHA256Hex
if c.secure {
contentSha256 = unsignedPayload
}
req.Header.Set("X-Amz-Content-Sha256", contentSha256)
req = signer.SignV4(*req, accessKeyID, secretAccessKey, sessionToken, "us-east-1")
return req, nil
}

Some files were not shown because too many files have changed in this diff.