Mirror of https://github.com/superseriousbusiness/gotosocial, synced 2025-06-05 21:59:39 +02:00
[chore] update viper version (#2539)
* update viper version
* removes our last uses of the slice package
* fix tests
vendor/github.com/sourcegraph/conc/.golangci.yml (generated, vendored, new file, 11 lines)
@@ -0,0 +1,11 @@
linters:
  disable-all: true
  enable:
    - errcheck
    - godot
    - gosimple
    - govet
    - ineffassign
    - staticcheck
    - typecheck
    - unused
vendor/github.com/sourcegraph/conc/LICENSE (generated, vendored, new file, 21 lines)
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Sourcegraph

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
vendor/github.com/sourcegraph/conc/README.md (generated, vendored, new file, 464 lines)
@@ -0,0 +1,464 @@
# `conc`: better structured concurrency for go

[Go Reference](https://pkg.go.dev/github.com/sourcegraph/conc)
[Sourcegraph](https://sourcegraph.com/github.com/sourcegraph/conc)
[Go Report Card](https://goreportcard.com/report/github.com/sourcegraph/conc)
[Codecov](https://codecov.io/gh/sourcegraph/conc)
[Discord](https://discord.gg/bvXQXmtRjN)

`conc` is your toolbelt for structured concurrency in go, making common tasks
easier and safer.

```sh
go get github.com/sourcegraph/conc
```

# At a glance

- Use [`conc.WaitGroup`](https://pkg.go.dev/github.com/sourcegraph/conc#WaitGroup) if you just want a safer version of `sync.WaitGroup`
- Use [`pool.Pool`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#Pool) if you want a concurrency-limited task runner
- Use [`pool.ResultPool`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#ResultPool) if you want a concurrent task runner that collects task results
- Use [`pool.(Result)?ErrorPool`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#ErrorPool) if your tasks are fallible
- Use [`pool.(Result)?ContextPool`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#ContextPool) if your tasks should be canceled on failure
- Use [`stream.Stream`](https://pkg.go.dev/github.com/sourcegraph/conc/stream#Stream) if you want to process an ordered stream of tasks in parallel with serial callbacks
- Use [`iter.Map`](https://pkg.go.dev/github.com/sourcegraph/conc/iter#Map) if you want to concurrently map a slice
- Use [`iter.ForEach`](https://pkg.go.dev/github.com/sourcegraph/conc/iter#ForEach) if you want to concurrently iterate over a slice
- Use [`panics.Catcher`](https://pkg.go.dev/github.com/sourcegraph/conc/panics#Catcher) if you want to catch panics in your own goroutines

All pools are created with
[`pool.New()`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#New)
or
[`pool.NewWithResults[T]()`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#NewWithResults),
then configured with methods (see the sketch after this list):

- [`p.WithMaxGoroutines()`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#Pool.MaxGoroutines) configures the maximum number of goroutines in the pool
- [`p.WithErrors()`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#Pool.WithErrors) configures the pool to run tasks that return errors
- [`p.WithContext(ctx)`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#Pool.WithContext) configures the pool to run tasks that should be canceled on first error
- [`p.WithFirstError()`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#ErrorPool.WithFirstError) configures error pools to only keep the first returned error rather than an aggregated error
- [`p.WithCollectErrored()`](https://pkg.go.dev/github.com/sourcegraph/conc/pool#ResultContextPool.WithCollectErrored) configures result pools to collect results even when the task errored

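A rough sketch of how those constructors and builder methods compose, assuming the chainable pool API documented above: a context pool that limits concurrency and cancels remaining tasks after the first failure.

```go
package main

import (
    "context"
    "errors"
    "fmt"

    "github.com/sourcegraph/conc/pool"
)

func main() {
    // Build a context pool: at most 4 goroutines, tasks return errors,
    // and outstanding tasks are canceled once the first error occurs.
    p := pool.New().
        WithMaxGoroutines(4).
        WithContext(context.Background()).
        WithFirstError()

    for i := 0; i < 10; i++ {
        i := i
        p.Go(func(ctx context.Context) error {
            if i == 3 {
                return errors.New("task 3 failed")
            }
            // Respect cancellation triggered by an earlier failure.
            select {
            case <-ctx.Done():
                return ctx.Err()
            default:
                return nil
            }
        })
    }

    // With WithFirstError, Wait returns only the first error seen.
    if err := p.Wait(); err != nil {
        fmt.Println("pool failed:", err)
    }
}
```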
# Goals

The main goals of the package are:
1) Make it harder to leak goroutines
2) Handle panics gracefully
3) Make concurrent code easier to read

## Goal #1: Make it harder to leak goroutines

A common pain point when working with goroutines is cleaning them up. It's
really easy to fire off a `go` statement and fail to properly wait for it to
complete.

`conc` takes the opinionated stance that all concurrency should be scoped.
That is, goroutines should have an owner and that owner should always
ensure that its owned goroutines exit properly.

In `conc`, the owner of a goroutine is always a `conc.WaitGroup`. Goroutines
are spawned in a `WaitGroup` with `(*WaitGroup).Go()`, and
`(*WaitGroup).Wait()` should always be called before the `WaitGroup` goes out
of scope.

In some cases, you might want a spawned goroutine to outlast the scope of the
caller. In that case, you could pass a `WaitGroup` into the spawning function.

```go
func main() {
    var wg conc.WaitGroup
    defer wg.Wait()

    startTheThing(&wg)
}

func startTheThing(wg *conc.WaitGroup) {
    wg.Go(func() { ... })
}
```

For some more discussion on why scoped concurrency is nice, check out [this
blog
post](https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/).

## Goal #2: Handle panics gracefully

A frequent problem with goroutines in long-running applications is handling
panics. A goroutine spawned without a panic handler will crash the whole process
on panic. This is usually undesirable.

However, if you do add a panic handler to a goroutine, what do you do with the
panic once you catch it? Some options:
1) Ignore it
2) Log it
3) Turn it into an error and return that to the goroutine spawner
4) Propagate the panic to the goroutine spawner

Ignoring panics is a bad idea since panics usually mean there is actually
something wrong and someone should fix it.

Just logging panics isn't great either because then there is no indication to the spawner
that something bad happened, and it might just continue on as normal even though your
program is in a really bad state.

Both (3) and (4) are reasonable options, but both require the goroutine to have
an owner that can actually receive the message that something went wrong. This
is generally not true with a goroutine spawned with `go`, but in the `conc`
package, all goroutines have an owner that must collect the spawned goroutine.
In the conc package, any call to `Wait()` will panic if any of the spawned goroutines
panicked. Additionally, it decorates the panic value with a stacktrace from the child
goroutine so that you don't lose information about what caused the panic.

Doing this all correctly every time you spawn something with `go` is not
trivial and it requires a lot of boilerplate that makes the important parts of
the code more difficult to read, so `conc` does this for you.

<table>
<tr>
<th><code>stdlib</code></th>
<th><code>conc</code></th>
</tr>
<tr>
<td>

```go
type caughtPanicError struct {
    val   any
    stack []byte
}

func (e *caughtPanicError) Error() string {
    return fmt.Sprintf(
        "panic: %q\n%s",
        e.val,
        string(e.stack),
    )
}

func main() {
    done := make(chan error)
    go func() {
        defer func() {
            if v := recover(); v != nil {
                done <- &caughtPanicError{
                    val:   v,
                    stack: debug.Stack(),
                }
            } else {
                done <- nil
            }
        }()
        doSomethingThatMightPanic()
    }()
    err := <-done
    if err != nil {
        panic(err)
    }
}
```
</td>
<td>

```go
func main() {
    var wg conc.WaitGroup
    wg.Go(doSomethingThatMightPanic)
    // panics with a nice stacktrace
    wg.Wait()
}
```
</td>
</tr>
</table>

## Goal #3: Make concurrent code easier to read

Doing concurrency correctly is difficult. Doing it in a way that doesn't
obfuscate what the code is actually doing is more difficult. The `conc` package
attempts to make common operations easier by abstracting as much boilerplate
complexity as possible.

Want to run a set of concurrent tasks with a bounded set of goroutines? Use
`pool.New()`. Want to process an ordered stream of results concurrently, but
still maintain order? Try `stream.New()`. What about a concurrent map over
a slice? Take a peek at `iter.Map()`.

Browse some examples below for some comparisons with doing these by hand.

# Examples

Each of these examples forgoes propagating panics for simplicity. To see
what kind of complexity that would add, check out the "Goal #2" header above.

Spawn a set of goroutines and wait for them to finish:

<table>
<tr>
<th><code>stdlib</code></th>
<th><code>conc</code></th>
</tr>
<tr>
<td>

```go
func main() {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // crashes on panic!
            doSomething()
        }()
    }
    wg.Wait()
}
```
</td>
<td>

```go
func main() {
    var wg conc.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Go(doSomething)
    }
    wg.Wait()
}
```
</td>
</tr>
</table>

Process each element of a stream in a static pool of goroutines:

<table>
<tr>
<th><code>stdlib</code></th>
<th><code>conc</code></th>
</tr>
<tr>
<td>

```go
func process(stream chan int) {
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for elem := range stream {
                handle(elem)
            }
        }()
    }
    wg.Wait()
}
```
</td>
<td>

```go
func process(stream chan int) {
    p := pool.New().WithMaxGoroutines(10)
    for elem := range stream {
        elem := elem
        p.Go(func() {
            handle(elem)
        })
    }
    p.Wait()
}
```
</td>
</tr>
</table>

Process each element of a slice in a static pool of goroutines:

<table>
<tr>
<th><code>stdlib</code></th>
<th><code>conc</code></th>
</tr>
<tr>
<td>

```go
func process(values []int) {
    feeder := make(chan int, 8)

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for elem := range feeder {
                handle(elem)
            }
        }()
    }

    for _, value := range values {
        feeder <- value
    }
    close(feeder)
    wg.Wait()
}
```
</td>
<td>

```go
func process(values []int) {
    iter.ForEach(values, handle)
}
```
</td>
</tr>
</table>

Concurrently map a slice:

<table>
<tr>
<th><code>stdlib</code></th>
<th><code>conc</code></th>
</tr>
<tr>
<td>

```go
func concMap(
    input []int,
    f func(int) int,
) []int {
    res := make([]int, len(input))
    var idx atomic.Int64

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()

            for {
                i := int(idx.Add(1) - 1)
                if i >= len(input) {
                    return
                }

                res[i] = f(input[i])
            }
        }()
    }
    wg.Wait()
    return res
}
```
</td>
<td>

```go
func concMap(
    input []int,
    f func(*int) int,
) []int {
    return iter.Map(input, f)
}
```
</td>
</tr>
</table>

Process an ordered stream concurrently:

<table>
<tr>
<th><code>stdlib</code></th>
<th><code>conc</code></th>
</tr>
<tr>
<td>

```go
func mapStream(
    in chan int,
    out chan int,
    f func(int) int,
) {
    tasks := make(chan func())
    taskResults := make(chan chan int)

    // Worker goroutines
    var workerWg sync.WaitGroup
    for i := 0; i < 10; i++ {
        workerWg.Add(1)
        go func() {
            defer workerWg.Done()
            for task := range tasks {
                task()
            }
        }()
    }

    // Ordered reader goroutines
    var readerWg sync.WaitGroup
    readerWg.Add(1)
    go func() {
        defer readerWg.Done()
        for result := range taskResults {
            item := <-result
            out <- item
        }
    }()

    // Feed the workers with tasks
    for elem := range in {
        resultCh := make(chan int, 1)
        taskResults <- resultCh
        tasks <- func() {
            resultCh <- f(elem)
        }
    }

    // We've exhausted input.
    // Wait for everything to finish
    close(tasks)
    workerWg.Wait()
    close(taskResults)
    readerWg.Wait()
}
```
</td>
<td>

```go
func mapStream(
    in chan int,
    out chan int,
    f func(int) int,
) {
    s := stream.New().WithMaxGoroutines(10)
    for elem := range in {
        elem := elem
        s.Go(func() stream.Callback {
            res := f(elem)
            return func() { out <- res }
        })
    }
    s.Wait()
}
```
</td>
</tr>
</table>

# Status

This package is currently pre-1.0. There are likely to be minor breaking
changes before a 1.0 release as we stabilize the APIs and tweak defaults.
Please open an issue if you have questions, concerns, or requests that you'd
like addressed before the 1.0 release. Currently, a 1.0 is targeted for
March 2023.
vendor/github.com/sourcegraph/conc/internal/multierror/multierror_go119.go (generated, vendored, new file, 10 lines)
@@ -0,0 +1,10 @@
//go:build !go1.20
// +build !go1.20

package multierror

import "go.uber.org/multierr"

var (
    Join = multierr.Combine
)
vendor/github.com/sourcegraph/conc/internal/multierror/multierror_go120.go (generated, vendored, new file, 10 lines)
@@ -0,0 +1,10 @@
//go:build go1.20
// +build go1.20

package multierror

import "errors"

var (
    Join = errors.Join
)
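For context, a small sketch of what this alias provides on Go 1.20 and newer: `Join` here is just `errors.Join`, which combines multiple errors into one value while keeping each of them matchable with `errors.Is`.

```go
package main

import (
    "errors"
    "fmt"
)

func main() {
    errA := errors.New("first failure")
    errB := errors.New("second failure")

    // On go1.20+, multierror.Join(errA, errB) is errors.Join(errA, errB).
    combined := errors.Join(errA, errB)

    fmt.Println(combined)                  // both messages, one per line
    fmt.Println(errors.Is(combined, errA)) // true
    fmt.Println(errors.Is(combined, errB)) // true
}
```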
vendor/github.com/sourcegraph/conc/iter/iter.go (generated, vendored, new file, 85 lines)
@@ -0,0 +1,85 @@
package iter

import (
    "runtime"
    "sync/atomic"

    "github.com/sourcegraph/conc"
)

// defaultMaxGoroutines returns the default maximum number of
// goroutines to use within this package.
func defaultMaxGoroutines() int { return runtime.GOMAXPROCS(0) }

// Iterator can be used to configure the behaviour of ForEach
// and ForEachIdx. The zero value is safe to use with reasonable
// defaults.
//
// Iterator is also safe for reuse and concurrent use.
type Iterator[T any] struct {
    // MaxGoroutines controls the maximum number of goroutines
    // to use on this Iterator's methods.
    //
    // If unset, MaxGoroutines defaults to runtime.GOMAXPROCS(0).
    MaxGoroutines int
}

// ForEach executes f in parallel over each element in input.
//
// It is safe to mutate the input parameter, which makes it
// possible to map in place.
//
// ForEach always uses at most runtime.GOMAXPROCS goroutines.
// It takes roughly 2µs to start up the goroutines and adds
// an overhead of roughly 50ns per element of input. For
// a configurable goroutine limit, use a custom Iterator.
func ForEach[T any](input []T, f func(*T)) { Iterator[T]{}.ForEach(input, f) }

// ForEach executes f in parallel over each element in input,
// using up to the Iterator's configured maximum number of
// goroutines.
//
// It is safe to mutate the input parameter, which makes it
// possible to map in place.
//
// It takes roughly 2µs to start up the goroutines and adds
// an overhead of roughly 50ns per element of input.
func (iter Iterator[T]) ForEach(input []T, f func(*T)) {
    iter.ForEachIdx(input, func(_ int, t *T) {
        f(t)
    })
}

// ForEachIdx is the same as ForEach except it also provides the
// index of the element to the callback.
func ForEachIdx[T any](input []T, f func(int, *T)) { Iterator[T]{}.ForEachIdx(input, f) }

// ForEachIdx is the same as ForEach except it also provides the
// index of the element to the callback.
func (iter Iterator[T]) ForEachIdx(input []T, f func(int, *T)) {
    if iter.MaxGoroutines == 0 {
        // iter is a value receiver and is hence safe to mutate
        iter.MaxGoroutines = defaultMaxGoroutines()
    }

    numInput := len(input)
    if iter.MaxGoroutines > numInput {
        // No more concurrent tasks than the number of input items.
        iter.MaxGoroutines = numInput
    }

    var idx atomic.Int64
    // Create the task outside the loop to avoid extra closure allocations.
    task := func() {
        i := int(idx.Add(1) - 1)
        for ; i < numInput; i = int(idx.Add(1) - 1) {
            f(i, &input[i])
        }
    }

    var wg conc.WaitGroup
    for i := 0; i < iter.MaxGoroutines; i++ {
        wg.Go(task)
    }
    wg.Wait()
}
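A short usage sketch for the API above (the values are made up, but the calls follow the signatures in this file): `ForEach` hands each element to the callback by pointer, so the slice can be mutated in place, and a non-zero `MaxGoroutines` bounds the concurrency.

```go
package main

import (
    "fmt"

    "github.com/sourcegraph/conc/iter"
)

func main() {
    values := []int{1, 2, 3, 4, 5}

    // Each element is passed by pointer, so writes through *v
    // update the slice in place.
    iter.ForEach(values, func(v *int) {
        *v *= 10
    })
    fmt.Println(values) // [10 20 30 40 50]

    // A custom Iterator caps the number of worker goroutines.
    it := iter.Iterator[int]{MaxGoroutines: 2}
    it.ForEachIdx(values, func(i int, v *int) {
        fmt.Printf("index %d holds %d\n", i, *v)
    })
}
```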
vendor/github.com/sourcegraph/conc/iter/map.go (generated, vendored, new file, 65 lines)
@@ -0,0 +1,65 @@
package iter

import (
    "sync"

    "github.com/sourcegraph/conc/internal/multierror"
)

// Mapper is an Iterator with a result type R. It can be used to configure
// the behaviour of Map and MapErr. The zero value is safe to use with
// reasonable defaults.
//
// Mapper is also safe for reuse and concurrent use.
type Mapper[T, R any] Iterator[T]

// Map applies f to each element of input, returning the mapped result.
//
// Map always uses at most runtime.GOMAXPROCS goroutines. For a configurable
// goroutine limit, use a custom Mapper.
func Map[T, R any](input []T, f func(*T) R) []R {
    return Mapper[T, R]{}.Map(input, f)
}

// Map applies f to each element of input, returning the mapped result.
//
// Map uses up to the configured Mapper's maximum number of goroutines.
func (m Mapper[T, R]) Map(input []T, f func(*T) R) []R {
    res := make([]R, len(input))
    Iterator[T](m).ForEachIdx(input, func(i int, t *T) {
        res[i] = f(t)
    })
    return res
}

// MapErr applies f to each element of the input, returning the mapped result
// and a combined error of all returned errors.
//
// Map always uses at most runtime.GOMAXPROCS goroutines. For a configurable
// goroutine limit, use a custom Mapper.
func MapErr[T, R any](input []T, f func(*T) (R, error)) ([]R, error) {
    return Mapper[T, R]{}.MapErr(input, f)
}

// MapErr applies f to each element of the input, returning the mapped result
// and a combined error of all returned errors.
//
// Map uses up to the configured Mapper's maximum number of goroutines.
func (m Mapper[T, R]) MapErr(input []T, f func(*T) (R, error)) ([]R, error) {
    var (
        res    = make([]R, len(input))
        errMux sync.Mutex
        errs   error
    )
    Iterator[T](m).ForEachIdx(input, func(i int, t *T) {
        var err error
        res[i], err = f(t)
        if err != nil {
            errMux.Lock()
            // TODO: use stdlib errors once multierrors land in go 1.20
            errs = multierror.Join(errs, err)
            errMux.Unlock()
        }
    })
    return res, errs
}
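A usage sketch for `Map` and `MapErr` (the example data is hypothetical; the calls match the signatures above). `MapErr` still fills every result slot it can and returns all collected errors joined into one.

```go
package main

import (
    "fmt"
    "strconv"

    "github.com/sourcegraph/conc/iter"
)

func main() {
    inputs := []string{"1", "2", "three", "4"}

    // Map: an infallible concurrent mapping over the slice.
    lengths := iter.Map(inputs, func(s *string) int { return len(*s) })
    fmt.Println(lengths) // [1 1 5 1]

    // MapErr: failed elements keep their zero value, and the errors
    // are combined into the single returned error.
    numbers, err := iter.MapErr(inputs, func(s *string) (int, error) {
        return strconv.Atoi(*s)
    })
    fmt.Println(numbers) // [1 2 0 4]
    fmt.Println(err)     // contains the strconv error for "three"
}
```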
vendor/github.com/sourcegraph/conc/panics/panics.go (generated, vendored, new file, 102 lines)
@@ -0,0 +1,102 @@
package panics

import (
    "fmt"
    "runtime"
    "runtime/debug"
    "sync/atomic"
)

// Catcher is used to catch panics. You can execute a function with Try,
// which will catch any spawned panic. Try can be called any number of times,
// from any number of goroutines. Once all calls to Try have completed, you can
// get the value of the first panic (if any) with Recovered(), or you can just
// propagate the panic (re-panic) with Repanic().
type Catcher struct {
    recovered atomic.Pointer[Recovered]
}

// Try executes f, catching any panic it might spawn. It is safe
// to call from multiple goroutines simultaneously.
func (p *Catcher) Try(f func()) {
    defer p.tryRecover()
    f()
}

func (p *Catcher) tryRecover() {
    if val := recover(); val != nil {
        rp := NewRecovered(1, val)
        p.recovered.CompareAndSwap(nil, &rp)
    }
}

// Repanic panics if any calls to Try caught a panic. It will panic with the
// value of the first panic caught, wrapped in a panics.Recovered with caller
// information.
func (p *Catcher) Repanic() {
    if val := p.Recovered(); val != nil {
        panic(val)
    }
}

// Recovered returns the value of the first panic caught by Try, or nil if
// no calls to Try panicked.
func (p *Catcher) Recovered() *Recovered {
    return p.recovered.Load()
}

// NewRecovered creates a panics.Recovered from a panic value and a collected
// stacktrace. The skip parameter allows the caller to skip stack frames when
// collecting the stacktrace. Calling with a skip of 0 means include the call to
// NewRecovered in the stacktrace.
func NewRecovered(skip int, value any) Recovered {
    // 64 frames should be plenty
    var callers [64]uintptr
    n := runtime.Callers(skip+1, callers[:])
    return Recovered{
        Value:   value,
        Callers: callers[:n],
        Stack:   debug.Stack(),
    }
}

// Recovered is a panic that was caught with recover().
type Recovered struct {
    // The original value of the panic.
    Value any
    // The caller list as returned by runtime.Callers when the panic was
    // recovered. Can be used to produce a more detailed stack information with
    // runtime.CallersFrames.
    Callers []uintptr
    // The formatted stacktrace from the goroutine where the panic was recovered.
    // Easier to use than Callers.
    Stack []byte
}

// String renders a human-readable formatting of the panic.
func (p *Recovered) String() string {
    return fmt.Sprintf("panic: %v\nstacktrace:\n%s\n", p.Value, p.Stack)
}

// AsError casts the panic into an error implementation. The implementation
// is unwrappable with the cause of the panic, if the panic was provided one.
func (p *Recovered) AsError() error {
    if p == nil {
        return nil
    }
    return &ErrRecovered{*p}
}

// ErrRecovered wraps a panics.Recovered in an error implementation.
type ErrRecovered struct{ Recovered }

var _ error = (*ErrRecovered)(nil)

func (p *ErrRecovered) Error() string { return p.String() }

func (p *ErrRecovered) Unwrap() error {
    if err, ok := p.Value.(error); ok {
        return err
    }
    return nil
}
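A small sketch of the `Catcher` workflow described in the comments above: run several functions through `Try`, then either inspect the first caught panic or re-throw it.

```go
package main

import (
    "fmt"

    "github.com/sourcegraph/conc/panics"
)

func main() {
    var c panics.Catcher

    // Panics inside Try are captured instead of crashing the process.
    c.Try(func() { fmt.Println("this one is fine") })
    c.Try(func() { panic("something broke") })
    c.Try(func() { fmt.Println("this still runs") })

    // Recovered returns the first caught panic (or nil), and AsError
    // exposes it as an ordinary error with the stacktrace attached.
    if r := c.Recovered(); r != nil {
        fmt.Println(r.AsError())
    }

    // Alternatively, c.Repanic() would re-throw it in this goroutine.
}
```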
vendor/github.com/sourcegraph/conc/panics/try.go (generated, vendored, new file, 11 lines)
@@ -0,0 +1,11 @@
package panics

// Try executes f, catching and returning any panic it might spawn.
//
// The recovered panic can be propagated with panic(), or handled as a normal error with
// (*panics.Recovered).AsError().
func Try(f func()) *Recovered {
    var c Catcher
    c.Try(f)
    return c.Recovered()
}
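A sketch of the one-shot helper in practice (`riskyInit` is a hypothetical callback): `Try` turns a panic from a single call into a value that the caller can convert into a normal error and return up the stack.

```go
package main

import (
    "fmt"

    "github.com/sourcegraph/conc/panics"
)

// riskyInit stands in for any callback that might panic.
func riskyInit() {
    panic("bad configuration")
}

// setup converts a panic from riskyInit into an ordinary error.
func setup() error {
    if r := panics.Try(riskyInit); r != nil {
        return fmt.Errorf("setup panicked: %w", r.AsError())
    }
    return nil
}

func main() {
    if err := setup(); err != nil {
        fmt.Println(err)
    }
}
```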
vendor/github.com/sourcegraph/conc/waitgroup.go (generated, vendored, new file, 52 lines)
@@ -0,0 +1,52 @@
package conc

import (
    "sync"

    "github.com/sourcegraph/conc/panics"
)

// NewWaitGroup creates a new WaitGroup.
func NewWaitGroup() *WaitGroup {
    return &WaitGroup{}
}

// WaitGroup is the primary building block for scoped concurrency.
// Goroutines can be spawned in the WaitGroup with the Go method,
// and calling Wait() will ensure that each of those goroutines exits
// before continuing. Any panics in a child goroutine will be caught
// and propagated to the caller of Wait().
//
// The zero value of WaitGroup is usable, just like sync.WaitGroup.
// Also like sync.WaitGroup, it must not be copied after first use.
type WaitGroup struct {
    wg sync.WaitGroup
    pc panics.Catcher
}

// Go spawns a new goroutine in the WaitGroup.
func (h *WaitGroup) Go(f func()) {
    h.wg.Add(1)
    go func() {
        defer h.wg.Done()
        h.pc.Try(f)
    }()
}

// Wait will block until all goroutines spawned with Go exit and will
// propagate any panics spawned in a child goroutine.
func (h *WaitGroup) Wait() {
    h.wg.Wait()

    // Propagate a panic if we caught one from a child goroutine.
    h.pc.Repanic()
}

// WaitAndRecover will block until all goroutines spawned with Go exit and
// will return a *panics.Recovered if one of the child goroutines panics.
func (h *WaitGroup) WaitAndRecover() *panics.Recovered {
    h.wg.Wait()

    // Return a recovered panic if we caught one from a child goroutine.
    return h.pc.Recovered()
}
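A usage sketch for the type above: `Wait` re-panics in the caller, while `WaitAndRecover` hands the caught panic back as a value.

```go
package main

import (
    "fmt"

    "github.com/sourcegraph/conc"
)

func main() {
    var wg conc.WaitGroup

    wg.Go(func() { fmt.Println("working") })
    wg.Go(func() { panic("worker blew up") })

    // Unlike Wait, WaitAndRecover does not re-panic; it returns the
    // first caught panic so the caller can decide what to do with it.
    if r := wg.WaitAndRecover(); r != nil {
        fmt.Println("recovered:", r.Value)
    }
}
```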