Compare commits

...

290 Commits

Author SHA1 Message Date
Frank Denis 8dadd61730
Merge pull request #2630 from DNSCrypt/dependabot/github_actions/softprops/action-gh-release-2.0.5
Bump softprops/action-gh-release from 2.0.2 to 2.0.5
2024-05-08 10:31:44 +02:00
dependabot[bot] f7da81cf29
Bump softprops/action-gh-release from 2.0.2 to 2.0.5
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 2.0.2 to 2.0.5.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](d99959edae...69320dbe05)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-08 03:55:52 +00:00
Frank Denis 0efce55895 Try another ODoH relay 2024-05-07 22:44:33 +02:00
Frank Denis 5a1d94d506 Update the test ODoH relay 2024-05-07 22:35:09 +02:00
Frank Denis 271943c158 Update quic-go 2024-05-07 17:39:56 +02:00
Frank Denis 34a1f2ebf5 Update quic-go 2024-04-27 22:33:56 +02:00
Frank Denis f8ce22d9b9 Update deps 2024-04-25 12:45:52 +02:00
Frank Denis 249dba391d Support gzip compression to fetch source files 2024-04-25 12:43:29 +02:00
Frank Denis 987ae216e3 Add fritz.box to the set of undelegated zones 2024-04-21 20:14:15 +02:00
Frank Denis 7fba32651b Make it more visible that DNS64 has been enabled 2024-04-19 18:27:39 +02:00
Frank Denis 6ae388e646 DNS64 plugin: don't return SYNTH data, alter the response directly
Fixes #2619

However, cached responses now appear with the "PASS" status rather
than "CLOAK".
2024-04-19 18:19:16 +02:00
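For context, a minimal Go sketch of the DNS64 synthesis that this change now performs in place on the response (illustrative only; the NAT64 prefix, function name, and omission of CNAME/TTL handling are assumptions, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"net"

	"github.com/miekg/dns"
)

// synthesizeAAAA maps an IPv4 answer into the well-known NAT64 prefix
// 64:ff9b::/96 and rewrites it as an AAAA record with the same name,
// class and TTL. Sketch only; the real plugin also handles CNAMEs and
// the configured prefix list.
func synthesizeAAAA(rr *dns.A, prefix net.IP) dns.RR {
	v4 := rr.A.To4()
	mapped := make(net.IP, net.IPv6len)
	copy(mapped, prefix.To16()[:12]) // keep the /96 prefix
	copy(mapped[12:], v4)            // embed the IPv4 address
	return &dns.AAAA{
		Hdr: dns.RR_Header{
			Name:   rr.Hdr.Name,
			Rrtype: dns.TypeAAAA,
			Class:  rr.Hdr.Class,
			Ttl:    rr.Hdr.Ttl,
		},
		AAAA: mapped,
	}
}

func main() {
	a := &dns.A{
		Hdr: dns.RR_Header{Name: "example.com.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 300},
		A:   net.ParseIP("192.0.2.1"),
	}
	fmt.Println(synthesizeAAAA(a, net.ParseIP("64:ff9b::")))
}
```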
Frank Denis 0af88bc875 Merge branch 'master' of github.com:DNSCrypt/dnscrypt-proxy
* 'master' of github.com:DNSCrypt/dnscrypt-proxy:
  chore: fix some typos in comments
2024-04-15 12:36:30 +02:00
Frank Denis d36edeb612 Update deps 2024-04-15 12:36:21 +02:00
Frank Denis 041a6c7d7f
Merge pull request #2615 from cuibuwei/master
chore: fix some typos in comments
2024-04-13 20:02:49 +02:00
cuibuwei 2c6416d5ae chore: fix some typos in comments
Signed-off-by: cuibuwei <cuibuwei@gmail.com>
2024-04-13 19:56:31 +08:00
Frank Denis 4d1cd67d4d Nits 2024-04-03 16:49:37 +02:00
Frank Denis 363d44919f Properly check for the sticky bit 2024-04-03 16:47:13 +02:00
Frank Denis a88076d06f
Merge pull request #2605 from edmonds/forward-root-subdomain-matches
Forwarding plugin: Support forwarding subdomains of the root domain
2024-03-26 21:32:06 +01:00
Frank Denis 119bc0b660 Update deps 2024-03-26 19:56:06 +01:00
Robert Edmonds 49000cd4f4 Forwarding plugin: Support forwarding subdomains of the root domain
This commit updates the forwarding plugin to support matching subdomains
of the root domain ("."). It looks like the forwarding plugin already
performs subdomain matches against the domains specified in the
forwarding rules files, but matches against the root domain weren't
working because of the way matches are performed by comparing the
normalized presentation format QNAME (which omits the trailing dot for
all QNAMEs except the root domain name).

Without this commit, only queries where the QNAME is exactly "."
would match a forwarding rule for the "." domain, like this (with
`offline_mode = true` and a single forwarding rule for the "." domain):

```
[2024-03-25 21:13:31]	100.100.100.100	.	NS	FORWARD	0ms	127.0.0.1:53
[2024-03-25 21:13:36]	100.100.100.100	com	NS	NOT_READY	0ms	-
```

With this commit I get the expected result:

```
[2024-03-25 21:40:07]	100.100.100.100	.	NS	FORWARD	0ms	127.0.0.1:53
[2024-03-25 21:40:09]	100.100.100.100	com	NS	FORWARD	0ms	127.0.0.1:53
```
2024-03-25 21:30:09 -04:00
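A minimal sketch of the matching rule described above, assuming normalized presentation-format QNAMEs and a hypothetical helper name (not the plugin's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesForwardRule reports whether a normalized presentation-format
// QNAME (trailing dot stripped, except for the root domain ".") falls
// under a forwarding rule's domain. The root domain is special-cased,
// since a suffix comparison against "." never matches other names.
func matchesForwardRule(qName, ruleDomain string) bool {
	if ruleDomain == "." {
		return true // every name is a subdomain of the root
	}
	return qName == ruleDomain || strings.HasSuffix(qName, "."+ruleDomain)
}

func main() {
	fmt.Println(matchesForwardRule(".", "."))     // true
	fmt.Println(matchesForwardRule("com", "."))   // true: the case fixed here
	fmt.Println(matchesForwardRule("com", "org")) // false
}
```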
Frank Denis ec46e09360
Merge pull request #2597 from DNSCrypt/dependabot/github_actions/softprops/action-gh-release-d99959edae48b5ffffd7b00da66dcdb0a33a52ee
Bump softprops/action-gh-release from 975c1b265e11dd76618af1c374e7981f9a6ff44a to d99959edae48b5ffffd7b00da66dcdb0a33a52ee
2024-03-11 09:23:02 +01:00
dependabot[bot] ea5808e024
Bump softprops/action-gh-release
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 975c1b265e11dd76618af1c374e7981f9a6ff44a to d99959edae48b5ffffd7b00da66dcdb0a33a52ee.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](975c1b265e...d99959edae)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-11 03:24:39 +00:00
Frank Denis 79a1aa8325
Merge pull request #2591 from alisonatwork/fix-build 2024-02-28 10:11:48 +01:00
Alison Winters 4c442f5dbb use go mod version for codeql 2024-02-28 10:07:03 +08:00
Alison Winters f7e13502c0 fix go version format 2024-02-28 09:47:34 +08:00
YX Hao 8d43ce9b56 make the expression more self-explanatory 2024-02-27 22:05:40 +08:00
YX Hao ac5087315c Listen `0.0.0.0` only on IPv4 2024-02-27 19:04:09 +08:00
Frank Denis ad80d81d43
Merge pull request #2587 from DNSCrypt/dependabot/github_actions/softprops/action-gh-release-975c1b265e11dd76618af1c374e7981f9a6ff44a
Bump softprops/action-gh-release from 4634c16e79c963813287e889244c50009e7f0981 to 975c1b265e11dd76618af1c374e7981f9a6ff44a
2024-02-26 08:47:39 +01:00
dependabot[bot] a7fb13ba4e
Bump softprops/action-gh-release
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from 4634c16e79c963813287e889244c50009e7f0981 to 975c1b265e11dd76618af1c374e7981f9a6ff44a.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](4634c16e79...975c1b265e)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-26 03:37:41 +00:00
Frank Denis 006619837f Update deps 2024-02-20 08:01:38 +01:00
Frank Denis 093936f7ab Check for dumb file permissions on startup
There's nothing special about "-service install".

On any system, executables shouldn't be modifiable by other system
users, no matter what the executable is and how it's run.
2024-02-20 02:39:39 +01:00
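A rough sketch of such a startup check, assuming Unix-style permission bits and hypothetical names (the actual checks also cover the main config file and the service executable):

```go
package main

import (
	"fmt"
	"os"
)

// warnIfWritableByOthers warns when a file is writable by group or
// other users, which would let other system users tamper with it.
func warnIfWritableByOthers(path string) error {
	fi, err := os.Stat(path)
	if err != nil {
		return err
	}
	if fi.Mode().Perm()&0o022 != 0 {
		fmt.Printf("[WARNING] %s is writable by other system users\n", path)
	}
	return nil
}

func main() {
	if exe, err := os.Executable(); err == nil {
		_ = warnIfWritableByOthers(exe)
	}
}
```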
Frank Denis 7462961980 Warn if the executable of the service being installed could be overwritten by other system users
Fixes #2579

Until this is handled by `kardianos/service`
2024-02-20 02:23:49 +01:00
Frank Denis 0b559bb54f Warn if the main config file could be written by other system users 2024-02-20 02:11:03 +01:00
Frank Denis 658835b4ff
Merge pull request #2573 from DNSCrypt/dependabot/github_actions/softprops/action-gh-release-4634c16e79c963813287e889244c50009e7f0981
Bump softprops/action-gh-release from c9b46fe7aad9f02afd89b12450b780f52dacfb2d to 4634c16e79c963813287e889244c50009e7f0981
2024-02-06 13:23:03 +01:00
Frank Denis 90c3017793
Merge pull request #2544 from DNSCrypt/dependabot/github_actions/actions/setup-go-5
Bump actions/setup-go from 4 to 5
2024-02-06 13:22:24 +01:00
dependabot[bot] e371138b86
Bump softprops/action-gh-release
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from c9b46fe7aad9f02afd89b12450b780f52dacfb2d to 4634c16e79c963813287e889244c50009e7f0981.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](c9b46fe7aa...4634c16e79)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-06 03:22:12 +00:00
Frank Denis bcbf2db4ff
Merge pull request #2568 from lifenjoiner/ci 2024-01-19 14:30:34 +01:00
YX Hao 64fa90839c CI tests the unregistered domain for a solid NS answer
It also applies formatting fixes.
2024-01-19 19:11:03 +08:00
Frank Denis f2484f5bd5 Cache plugin: replace ARC cache with SIEVE 2024-01-19 00:05:33 +01:00
Frank Denis 63f8d9b30d Update deps 2024-01-18 23:47:00 +01:00
Xiaotong Liu 49e3570c2c
Support server refresh concurrency (#2537)
* simultaneously refresh all servers

* Add `cert_refresh_concurrency`

---------

Co-authored-by: YX Hao <lifenjoiner@163.com>
2023-12-18 19:25:54 +08:00
lifenjoiner 3be53642fe
Merge pull request #2549 from lifenjoiner/wg
Optimize CaptivePortalHandler for clean code
2023-12-17 18:59:05 +08:00
YX Hao 13e7077200 Optimize CaptivePortalHandler for clean code 2023-12-14 19:23:42 +08:00
Frank Denis f5912d7ca9
Merge pull request #2548 from DNSCrypt/dependabot/github_actions/github/codeql-action-3
Bump github/codeql-action from 2 to 3
2023-12-14 07:47:05 +01:00
dependabot[bot] 0196d7d2ab
Bump github/codeql-action from 2 to 3
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 2 to 3.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-14 03:22:09 +00:00
Frank Denis 898ded9c52 Merge branch 'master' of github.com:DNSCrypt/dnscrypt-proxy
* 'master' of github.com:DNSCrypt/dnscrypt-proxy:
  add timeout for udp and tcp dialer
  Use blocking channel instead of looping sleep for less CPU usage
2023-12-13 08:35:55 +01:00
Frank Denis e782207911 Update deps 2023-12-13 08:35:41 +01:00
Frank Denis 0f1f635ec1
Merge pull request #2545 from keatonLiu/dial-timeout
add timeout for udp and tcp dialer
2023-12-11 14:45:20 +01:00
keatonLiu 956a14ee21 add timeout for udp and tcp dialer 2023-12-10 23:56:13 +08:00
dependabot[bot] 22731786a2
Bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-07 03:49:26 +00:00
Frank Denis 42b6ae9052
Merge pull request #2541 from lifenjoiner/chan
Use blocking channel instead of looping sleep for less CPU usage
2023-12-03 17:24:10 +01:00
YX Hao 0d5e52bb16 Use blocking channel instead of looping sleep for less CPU usage 2023-12-03 23:00:10 +08:00
Frank Denis 0ba728b6ce Update deps 2023-11-15 15:51:48 -08:00
Frank Denis cb80bf33e8 shellcheck 2023-11-09 22:47:41 +01:00
Frank Denis 88207560a7 shellcheck fixes 2023-11-09 22:45:11 +01:00
Jeffrey Damick 4a361dbb05 Added support to package dnscrypt for windows into an msi 2023-11-09 13:14:29 -08:00
Frank Denis b37a5c991a Update deps 2023-10-12 15:53:53 +02:00
Frank Denis 0232870870 -list: only copy nofilter flag for ODoH relays 2023-09-23 22:52:43 +02:00
Frank Denis 1a9bf8a286 Omit DNSSEC flag for relays 2023-09-23 18:46:11 +02:00
Frank Denis 7fb58720fb Add -include-relays option to include relays in -list and -list-all 2023-09-23 18:37:52 +02:00
Frank Denis f85b3e81ec Update deps 2023-09-19 20:59:57 +02:00
Frank Denis 79779cf744 Merge branch 'master' of github.com:DNSCrypt/dnscrypt-proxy
* 'master' of github.com:DNSCrypt/dnscrypt-proxy:
  Bump actions/checkout from 3 to 4
2023-09-05 22:37:54 +02:00
Frank Denis 8bea679e7b Unofficially support DoH/ODoH over HTTP 2023-09-05 22:37:11 +02:00
Frank Denis 96f21f1bff
Merge pull request #2479 from DNSCrypt/dependabot/github_actions/actions/checkout-4
Bump actions/checkout from 3 to 4
2023-09-05 08:13:08 +02:00
dependabot[bot] 21097686c1
Bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-05 03:29:06 +00:00
Frank Denis 87571d4a7f Add an IPv6 forwarding example
Fixes #2474
2023-08-30 21:32:22 +02:00
Frank Denis f531c8fffb plugin_forward: parse more conventions for IPv6 addresses 2023-08-30 21:29:09 +02:00
Frank Denis 5ae83c1592 Remove minor go version for codeql 2023-08-26 18:28:55 +02:00
Frank Denis c86e9a90cc Update quic-go 2023-08-25 16:15:45 +02:00
Frank Denis d48c811ea9 Don't use absolute paths in the example file 2023-08-17 14:44:47 +02:00
Frank Denis f2b1edcec2 Add dnscry.pt servers 2023-08-17 14:43:33 +02:00
Frank Denis 1b65fe62b0 Bump 2023-08-11 18:56:31 +02:00
Frank Denis 194752e829 Update ChangeLog 2023-08-11 18:47:44 +02:00
Frank Denis 808f2dfa0e Update deps 2023-08-11 17:06:18 +02:00
Frank Denis 7dd79d5f96 Add a little bit more delay when spinning
But we really shouldn't do it that way, if only because there's a race
between the last write to the channel and the close() call
2023-08-11 15:24:14 +02:00
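The race mentioned above is the classic send-versus-close hazard: if one goroutine can still send while another closes the channel, the send panics. An illustrative sketch of the safer pattern (close from a single place, only after all senders are done):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	results := make(chan int)
	var wg sync.WaitGroup

	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n
		}(i)
	}

	// Close only once every sender is guaranteed to have finished,
	// so no send can race with the close() call.
	go func() {
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
```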
Frank Denis 5088d8fae1 Use the latest version of Go in CI 2023-08-11 14:59:34 +02:00
Frank Denis aff09648bb Add support for extended error codes 2023-08-11 14:59:10 +02:00
Frank Denis 7bca9a6c0a
Merge pull request #2457 from Neo2308/feature/master/check-latest-go
Configured Github CI to always use latest go version
2023-08-11 14:07:59 +02:00
Frank Denis 98d0938815 Make RefreshDelay match the documentation 2023-08-11 14:06:03 +02:00
Frank Denis 50780421a8 Remove ipv6.download.dnscrypt.info
IPv6 address has been added to download.dnscrypt.info
2023-08-11 14:04:20 +02:00
RadhaKrishna be7d5d1277 Configured Github CI to always use latest go version

* Check if there is a newer version of go before using the cached version.
2023-08-11 17:15:09 +05:30
Frank Denis c3dd761b81 Make error more explicit 2023-08-11 12:07:13 +02:00
Frank Denis d8aec47a72 Revert "Make RefreshDelay match the documentation"
This reverts commit cfd6ced134.
2023-08-11 11:48:30 +02:00
Frank Denis cfd6ced134 Make RefreshDelay match the documentation 2023-08-11 11:42:12 +02:00
Frank Denis bdf27330c9 Make fetchWithCache() more readable 2023-08-11 11:24:54 +02:00
Frank Denis a108d048d8 A useless Chtimes() call is still required for the tests :/ 2023-08-11 11:16:44 +02:00
Frank Denis afcfd566c9 Make updateCache() more readable 2023-08-11 11:11:16 +02:00
Frank Denis ce55d1c5bb Get rid of named return parameters 2023-08-11 11:01:55 +02:00
Frank Denis 2481fbebd7 Revert b898e07066 2023-08-11 01:39:15 +02:00
Frank Denis 32aad7bb34 Format fix 2023-08-11 01:15:34 +02:00
Frank Denis 7033f242c0 Restore the cache update code from version 2.1.4 for now 2023-08-11 00:51:34 +02:00
Frank Denis 2675d73b13 Port changes from #2334
I'm not sure I follow, but I trust @lifenjoiner

Fixes #2334
2023-08-11 00:17:46 +02:00
Frank Denis 5085a22903 Update quic-go again 2023-08-10 23:50:43 +02:00
Frank Denis 7cc5a051c7 Update golang-lru 2023-08-08 14:21:12 +02:00
Frank Denis 894d20191f Update deps 2023-08-07 17:32:25 +02:00
Frank Denis 0a98be94a7 Update quic-go to fix two regressions 2023-08-01 00:06:22 +02:00
Frank Denis 1792c06bc7
Merge pull request #2442 from Expertcoderz/patch-1
Add note regarding block_unqualified setting
2023-07-25 14:55:42 +02:00
Expertcoderz 63e414021b
Add note regarding block_unqualified 2023-07-25 12:36:07 +00:00
Frank Denis d659a801c2 Big update to quic-go 2023-07-22 00:41:27 +02:00
Frank Denis a4eda39563
Merge pull request #2438 from Expertcoderz/patch-1
Add .mail & .home.arpa undelegated names
2023-07-15 20:07:05 +02:00
Expertcoderz 4114f032c3
Add .mail & .home.arpa undelegated names
Both names have been recognized for internal use in private networks.
2023-07-15 13:12:40 +00:00
Frank Denis a352a3035c Update deps 2023-07-08 14:56:13 +02:00
Frank Denis 60684f8ee4
Merge pull request #2429 from lifenjoiner/quic-go
Upgrade quic-go to v0.36.1
2023-07-08 14:54:39 +02:00
YX Hao be369a1f7a Shorten a line 2023-07-06 21:01:41 +08:00
YX Hao 89ccc59f0e Upgrade quic-go to v0.36.1
quic-go has made breaking changes since v0.35.0, including implementing
`CloseIdleConnections`.
Now the local listener UDPConns are reused and don't pile up, but
one instance (IPv4/IPv6) persists for each connected server.
2023-07-05 21:19:54 +08:00
Frank Denis 16b2c84147 Tone down some errors 2023-06-24 22:38:59 +02:00
Carlo Teubner b46775ae0c
Add some missing error checks (#2420)
I found these with the 'errcheck' tool (via 'golangci-lint').

I aimed to apply reasonable judgement when deciding which errors
actually need handling, and how to handle them.
2023-06-24 22:23:12 +02:00
Frank Denis cef4b041d7 Don't call "bin" what is actually text 2023-06-24 22:11:47 +02:00
Carlo Teubner d8b1f4e7cd
Fix miscellaneous style issues (#2421)
Found by running: golangci-lint run --enable-all

I have only addressed the reported issues that seemed relevant to me.
2023-06-24 21:56:03 +02:00
Frank Denis 23a6cd7504 Revert "Update quic-go"
This reverts commit f9f68cf0a3.

quic-go >= 0.35 panics

We may not be using the new API correctly.
2023-06-22 11:06:37 +02:00
Frank Denis f42b7dad17 Update deps 2023-06-22 10:36:33 +02:00
Frank Denis 4f3ce0cbae Update deps 2023-06-18 04:19:52 +02:00
Frank Denis 0f1e3b4ba8 error check all the rand.Read() calls 2023-06-06 09:16:44 +02:00
Frank Denis 62ef5c9d02 Update quic-go to fix a regression 2023-06-01 11:32:10 +02:00
Frank Denis f9f68cf0a3 Update quic-go 2023-05-30 18:17:27 +02:00
Frank Denis 0c26d1637a Add support for TLS key logging 2023-05-24 09:21:49 +02:00
Frank Denis 9f86ffdd1e Update deps 2023-05-13 11:39:11 +02:00
lifenjoiner 9b2c674744
Dereference clientAddr based explicitly on the clientProto value (#2393)
There are local_doh and trampoline variants for the internal flow.
2023-05-13 11:22:52 +02:00
Frank Denis d381af5510 Update deps 2023-05-01 16:05:02 +02:00
Frank Denis c66023c7d7 Clarify that TLS cipher suites are for TLS 1.2
Fixes #2377
2023-04-18 13:15:59 -06:00
Frank Denis 5b8e7d4114 Use the same command as on the wiki to create a local DoH cert 2023-04-14 23:08:10 +02:00
KOLANICH f4007f709d
Add DOH certificate generation commands into the example config. (#2367) 2023-04-14 21:34:29 +02:00
lifenjoiner dd1c066724
CI: Don't download already vendored dependencies (#2370) 2023-04-14 21:21:22 +02:00
lifenjoiner 5d551e54ce
CI: Fix actions/setup-go@v4 warning (#2371)
Warning: Restore cache failed: Dependencies file is not found ...
https://github.com/actions/setup-go#caching-dependency-files-and-build-outputs
2023-04-14 21:20:37 +02:00
Thad Guidry fbc7817366
fix grammar in example file (#2372) 2023-04-14 21:19:55 +02:00
Frank Denis 9b61b73852 Update deps 2023-04-07 16:42:57 +02:00
Frank Denis af6340df09 Comment 2023-04-07 16:20:26 +02:00
Frank Denis 9c73ab3070 Simplify updateCache() 2023-04-07 16:18:50 +02:00
Frank Denis ea3625bcfd Try to simplify updateCache() to understand what it does 2023-04-07 16:09:51 +02:00
Frank Denis f567f57150 up 2023-04-07 15:58:34 +02:00
Frank Denis c03f1a31eb Go named return parameters are utterly confusing 2023-04-07 15:37:09 +02:00
Frank Denis c3c51bb435 Partially re-merge 92ed5b95e0 2023-04-07 15:21:00 +02:00
Frank Denis 0f30b3b028 Revert "Try to understand how cache files are updated"
This reverts commit 92ed5b95e0.
2023-04-07 15:16:15 +02:00
lifenjoiner 6d826afac5
Reduce a local variable (#2363) 2023-04-06 14:22:21 +02:00
Frank Denis b341c21dbd Merge branch 'master' of github.com:DNSCrypt/dnscrypt-proxy
* 'master' of github.com:DNSCrypt/dnscrypt-proxy:
  Bump softprops/action-gh-release (#2357)
  Bump actions/setup-go from 3 to 4 (#2354)
  Update deps
  Format
  Better description for ignore_system_dns
  Move booleans together for alignment, avoid unneeded format string
  Try dnscrypt-proxy itself to resolve configured hosts when ignore_system_dns is set (#2204)
  Downgrade to TLS 1.2 if a 1.3-incompatible cipher suite is set
2023-04-06 14:21:15 +02:00
Frank Denis 92ed5b95e0 Try to understand how cache files are updated
Having to keep a copy of all the files in memory is weird.

We shouldn't have to do that.
2023-04-06 14:19:25 +02:00
Frank Denis b898e07066 A source URL may have an IP address that doesn't exist any more 2023-04-06 14:18:38 +02:00
dependabot[bot] 92063aa76d
Bump softprops/action-gh-release (#2357)
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from d4e8205d7e959a9107da6396278b2f1f07af0f9b to c9b46fe7aad9f02afd89b12450b780f52dacfb2d.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](d4e8205d7e...c9b46fe7aa)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-05 21:43:53 +02:00
dependabot[bot] 4be5264529
Bump actions/setup-go from 3 to 4 (#2354)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 4.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-05 21:43:43 +02:00
Frank Denis 13d78c042b Update deps 2023-04-05 21:36:29 +02:00
Frank Denis 36c17eb59a Format 2023-04-05 21:33:21 +02:00
Frank Denis b9f8f78c6e Better description for ignore_system_dns 2023-04-05 21:31:07 +02:00
Frank Denis fc16e3c31c Move booleans together for alignment, avoid unneeded format string 2023-04-05 21:20:42 +02:00
lifenjoiner b3318a94b7
Try dnscrypt-proxy itself to resolve configured hosts when ignore_system_dns is set (#2204) 2023-04-05 21:17:51 +02:00
Frank Denis ca0f353087 Downgrade to TLS 1.2 if a 1.3-incompatible cipher suite is set
Fixes #2359
2023-04-05 20:53:27 +02:00
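Background for this change, as a hedged sketch: Go's `crypto/tls` only honours `Config.CipherSuites` for TLS 1.2 and below (TLS 1.3 suites are not configurable), so pinning cipher suites requires capping the maximum version. Function and parameter names below are assumptions, not the project's code:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// buildTLSConfig caps the TLS version at 1.2 whenever the user pins
// cipher suites, so that the pinned suites actually take effect.
func buildTLSConfig(pinnedSuites []uint16) *tls.Config {
	cfg := &tls.Config{MinVersion: tls.VersionTLS12}
	if len(pinnedSuites) > 0 {
		cfg.CipherSuites = pinnedSuites
		cfg.MaxVersion = tls.VersionTLS12 // downgrade so the suites apply
	}
	return cfg
}

func main() {
	cfg := buildTLSConfig([]uint16{tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384})
	fmt.Printf("max TLS version: %#x\n", cfg.MaxVersion)
}
```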
Frank Denis cf7d60a704 Update miekg/dns 2023-03-29 22:00:24 +09:00
Frank Denis a47f7fe750 Update deps 2023-03-23 12:54:45 +01:00
Frank Denis beb002335f Add an example forwarding rule with Tor 2023-03-23 12:53:08 +01:00
Frank Denis 15c87a68a1 Update quic-go 2023-02-25 23:45:38 +01:00
Frank Denis 47e6a56b16 Logger: pre-create log files before lumberjack does
Clunky workaround for https://github.com/natefinch/lumberjack/issues/164
2023-02-25 23:42:38 +01:00
Frank Denis 03c6f92a5f Use crypto_rand() everywhere 2023-02-24 16:20:39 +01:00
lifenjoiner 24a301b1af
Fix DoH3 connections piling up (#2337)
DoH3 creates a new connection for each request without closing it.

* `Conn` should be self-maintained if it's created by a customized `Dial` of `http3.RoundTripper`.
https://pkg.go.dev/github.com/quic-go/quic-go#DialAddrEarlyContext

* http3 doesn't have a `CloseIdleConnections`.
https://pkg.go.dev/net/http#Client.CloseIdleConnections
2023-02-24 16:14:43 +01:00
lifenjoiner a8d1c2fd24
`dlog.SetLogLevel(dlog.SeverityDebug)` if `go test -v` (#2331) 2023-02-21 16:24:11 +01:00
Frank Denis 96ffb21228 Update deps 2023-02-15 18:42:36 +01:00
Frank Denis acc25fcefb Format with gofumpt 2023-02-11 14:27:12 +01:00
Frank Denis 07b4ec33c5 Minor update of x/net and x/crypto 2023-02-09 17:23:48 +01:00
Frank Denis 9f3ef735f2 Bump 2023-02-07 11:03:09 +01:00
Frank Denis d568e43937 Update deps 2023-02-07 11:03:09 +01:00
Frank Denis 68f3ab249c Unbreak cloaking plugin
In version 2.1.3, when the cloaking plugin was enabled, a blocked
response was returned for records that were not A/AAAA/PTR, even
for names that were not in the cloaked list.
2023-02-07 11:03:05 +01:00
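A simplified sketch of the corrected control flow (hypothetical names and types; the actual plugin works on DNS messages rather than strings):

```go
package main

import (
	"fmt"
	"net"

	"github.com/miekg/dns"
)

// handleCloaking only special-cases names that actually appear in the
// cloaking rules; every other name passes through untouched.
func handleCloaking(name string, qtype uint16, rules map[string][]net.IP) string {
	ips, cloaked := rules[name]
	if !cloaked {
		// Not in the cloaked list: never block. The 2.1.3 regression
		// blocked unsupported record types before reaching this check.
		return "PASS"
	}
	switch qtype {
	case dns.TypeA, dns.TypeAAAA, dns.TypePTR:
		_ = ips // answer with the cloaked addresses
		return "CLOAK"
	default:
		return "REJECT" // unsupported record types for a cloaked name
	}
}

func main() {
	rules := map[string][]net.IP{"cloaked.example": {net.ParseIP("1.1.1.1")}}
	fmt.Println(handleCloaking("example.com", dns.TypeMX, rules))     // PASS
	fmt.Println(handleCloaking("cloaked.example", dns.TypeMX, rules)) // REJECT
}
```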
dependabot[bot] 2edfdc48b8
Bump roots/issue-closer from 1.1 to 1.2 (#2307)
Bumps [roots/issue-closer](https://github.com/roots/issue-closer) from 1.1 to 1.2.
- [Release notes](https://github.com/roots/issue-closer/releases)
- [Commits](https://github.com/roots/issue-closer/compare/v1.1...v1.2)

---
updated-dependencies:
- dependency-name: roots/issue-closer
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-06 10:46:14 +01:00
Frank Denis 3f31c4d3e2 Update issue-close-message 2023-02-04 22:15:18 +01:00
Frank Denis 84184bbad8 Merge branch 'master' of github.com:DNSCrypt/dnscrypt-proxy
* 'master' of github.com:DNSCrypt/dnscrypt-proxy:
  GitHub Actions: Deprecating save-state and set-output commands (#2295)
  Nits (#2293)
  Make CodeQL happy (#2294)
2023-02-04 22:11:14 +01:00
Frank Denis f630094e8d Add autocloser 2023-02-04 22:11:05 +01:00
lifenjoiner 3517dec376
GitHub Actions: Deprecating save-state and set-output commands (#2295) 2023-02-03 16:24:32 +01:00
lifenjoiner 683aad75da
Nits (#2293) 2023-02-03 16:23:57 +01:00
lifenjoiner e1c7ea1770
Make CodeQL happy (#2294) 2023-02-03 16:22:32 +01:00
Frank Denis 470460f069 GO386=387 -> GO386=softfloat
Fixes #2296
2023-02-03 16:21:51 +01:00
Frank Denis 8694753866 Bump action-gh-release up 2023-02-02 20:28:37 +01:00
Frank Denis b4b58366cc Update ChangeLog 2023-02-02 20:27:16 +01:00
Frank Denis f7df72eafa Bump to 2.1.3 2023-02-02 20:10:54 +01:00
Frank Denis fb15535282 Format 2023-02-02 20:10:49 +01:00
Frank Denis c32aad3dfd Update GitHub status badge 2023-02-02 20:09:09 +01:00
Frank Denis 9e208e6090 Cloak plugin: reject uncloaked records, except NS & SOA
Fixes #2220
2023-02-02 19:59:47 +01:00
Frank Denis 5f88a9146c Get rid of the latest ioutil bits 2023-02-02 19:44:51 +01:00
Frank Denis 3f23ff5c08 Mostly get rid of ioutil 2023-02-02 19:38:24 +01:00
Frank Denis 33c8027e0a Use a custom dialer for HTTP/3 2023-02-02 19:32:17 +01:00
Frank Denis 11e824bd13 Update go-acl 2023-02-02 12:44:12 +01:00
Deltadroid c3fd855831
Update quic-go dependency to support go 1.20 (#2292) 2023-02-02 12:42:11 +01:00
Frank Denis 5438eed2f4 Update go/x/crypto 2023-01-08 21:37:50 +01:00
Frank Denis a868e2b306 2023 2023-01-05 14:06:00 +01:00
Frank Denis f21eca0764 Add time.google.com IP addresses to the captive portals example 2022-12-30 13:50:31 +01:00
Frank Denis c883949a97 Document cert_ignore_timestamp 2022-12-29 22:39:12 +01:00
Frank Denis e13b4842e8 Update deps 2022-12-17 14:50:10 +01:00
dependabot[bot] 5705b60936
Bump softprops/action-gh-release (#2247)
Bumps [softprops/action-gh-release](https://github.com/softprops/action-gh-release) from cd28b0f5ee8571b76cfdaa62a30d51d752317477 to 1. This release includes the previously tagged commit.
- [Release notes](https://github.com/softprops/action-gh-release/releases)
- [Changelog](https://github.com/softprops/action-gh-release/blob/master/CHANGELOG.md)
- [Commits](cd28b0f5ee...de2c0eb89a)

---
updated-dependencies:
- dependency-name: softprops/action-gh-release
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-22 11:42:01 +01:00
Frank Denis 5f4dfc5d6e Update quic-go 2022-11-19 23:01:21 +01:00
Frank Denis a89d96144f Update deps 2022-11-16 21:38:52 +01:00
Frank Denis 4aa415de6e Update deps 2022-10-24 10:20:25 +02:00
Frank Denis f4389559ef Pin softprops/action-gh-release 2022-10-24 10:10:58 +02:00
Frank Denis 361455cd58 ServiceManagerReadyNotify is not just for systemd 2022-10-20 15:33:43 +02:00
cobratbq 77059ce450
systemd: report Ready earlier as dnscrypt-proxy can itself manage retries for updates/refreshes (#2225) 2022-10-20 15:32:26 +02:00
Frank Denis 09a6918226 Use os.Geteuid()
Fixes #2224
2022-10-18 14:56:39 +02:00
Frank Denis c748630691 Update deps 2022-10-15 10:37:07 +02:00
Frank Denis 94cba8cf78 Update quic-go 2022-09-26 15:12:20 +02:00
Maurizio Pasquinelli ca253923eb
Rename macOS binary (#2191) 2022-08-31 12:05:35 +02:00
Frank Denis 08d44241b9 Update deps 2022-08-30 20:45:06 +02:00
lifenjoiner 4881186dcf
Optimize adopted relay name to show (#2188)
* Optimize adopted relay name to show

A DNSCrypt relay requires ServerAddrStr;
an ODoH relay requires ProviderName, and port 443 may or may not be present;
a raw stamp can be both.

Displaying the specified stamp makes it easier to debug.

* Fix pasto
2022-08-25 19:28:04 +02:00
Frank Denis 41f192a907 Mention HTTP/3 2022-08-24 17:35:34 +02:00
Frank Denis 937c1e63e2 Revert "xtransport layer to netip and immediate dependencies (#2159)"
This reverts commit baee50f1dc.
2022-08-10 22:24:36 +02:00
Frank Denis e124623ffc Mention HTTP/3 support 2022-08-03 15:31:57 +02:00
lifenjoiner 55fc4c207b
Log to console when in command mode (#2167)
Gives quick results.
Avoids overwriting the log file that is, most of the time, already in use by the same config.
2022-08-03 14:52:08 +02:00
Ian Bashford baee50f1dc
xtransport layer to netip and immediate dependencies (#2159) 2022-08-01 22:31:12 +02:00
Frank Denis 6e1bc06477 Update quic-go 2022-08-01 17:59:01 +02:00
Frank Denis 8523a92437 Update example to include http3 configuration 2022-07-24 16:16:21 +02:00
Frank Denis 442f2e15cb Make HTTP/3 support configurable 2022-07-24 16:13:14 +02:00
Frank Denis 0c88e2a1a0 ChangeLog update 2022-07-24 15:43:03 +02:00
Frank Denis 35063a1eec Quit dep update before the release 2022-07-24 15:06:14 +02:00
Frank Denis 07266e4d4f Update toml dep 2022-07-21 19:02:28 +02:00
Frank Denis 5977de660b Add support for DoH over HTTP/3 2022-07-21 18:50:10 +02:00
lifenjoiner 91388b148c
Optimize stopping CaptivePortalHandler - 2 (#2155)
1. Fix the early return, introduced by 8e46f447, that triggers a port rebinding error.
2. Reduce waiting time while there are multiple listen_addresses.
2022-07-19 12:35:52 +02:00
lifenjoiner 8e46f44799
Optimize stopping CaptivePortalHandler (#2151)
* Optimize stopping CaptivePortalHandler

* Still use unbuffered channel as we close it instead of sending a signal
2022-07-14 21:53:13 +02:00
Frank Denis 3d641b758a Bump 2022-07-13 18:49:50 +02:00
Frank Denis 49ea894ce8 Update dependencies 2022-07-13 18:48:17 +02:00
lifenjoiner 568f54fabb
Reduce comparisons (#2148) 2022-07-08 14:11:51 +02:00
pc-v2 dc2fff05be
Adding flush dns to clean up previous dns cache (#2141) 2022-07-03 14:13:12 +02:00
Frank Denis 38e87f9a7b Add a constant for the maximum number of attempts 2022-06-28 18:30:15 +02:00
lifenjoiner 0e2bb13254
Fix goroutines memory leak by unbuffered channel blocking (#2136)
* Use a buffered channel to avoid goroutines hanging

A send on an unbuffered channel can only proceed once a receiver is ready.

* Balance captivePortalHandler.cancelChannels for Stop
2022-06-28 18:28:57 +02:00
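An illustrative sketch of the leak pattern and the buffered-channel fix described above (names are hypothetical):

```go
package main

import "fmt"

// probe returns the first result. If the channel were unbuffered, the
// remaining senders would block forever once probe() returns, leaking
// one goroutine per unread result. Capacity for every sender lets all
// sends complete even with no receiver left.
func probe(addrs []string) string {
	done := make(chan string, len(addrs)) // buffered: senders never block
	for _, addr := range addrs {
		go func(a string) {
			done <- a // completes even if probe() has already returned
		}(addr)
	}
	return <-done
}

func main() {
	fmt.Println(probe([]string{"127.0.0.1:53", "[::1]:53"}))
}
```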
Frank Denis 59ce17e0ab No need to warn if this is then going to be an error 2022-06-24 15:41:05 +02:00
Frank Denis ee5c9d67a4 Update miekg/dns 2022-06-24 15:18:33 +02:00
Frank Denis 8c43118b03 Stop mentioning "SERVFAIL" in info messages 2022-06-19 20:38:49 +02:00
ignoramous 7177a0ec74
dns64: preserve cnames in translated response (#2129)
* dns64: preserve cnames in translated response

* dns64: rename synthAAAAs to synth64
2022-06-16 00:53:50 +02:00
lifenjoiner 72a602577a
Raise error for invalid relay (#2128)
* Raise error for invalid relay

* Keep error messages the same

* Distinguish this from validation failed
2022-06-15 13:16:06 +02:00
lifenjoiner 0a0b69d93d
RUnlock for early exit (#2127) 2022-06-14 14:25:52 +02:00
lifenjoiner 6916c047e1
Use registeredServers slice copy during ServerInfo refreshing period (#2125)
goroutines:
proxy.updateRegisteredServers() versus proxy.serversInfo.refresh(proxy)
2022-06-13 17:51:33 +02:00
ignoramous 8d737a69f5
PluginDNS64: Use read and write mutexes as approp (#2124) 2022-06-12 11:27:55 +02:00
Frank Denis 866954fbad PreferServerCipherSuites has been deprecated 2022-06-11 19:26:26 +02:00
Frank Denis e477d0e126 We may not have a configured IP address 2022-06-11 19:23:58 +02:00
Frank Denis e24fdd2235 Nits 2022-06-07 21:33:50 +02:00
livingentity 74fb5dabb9
fix negative rtt / shorten lines (#2118)
* fix negative rtt / shorten lines

* Update serversInfo.go
2022-05-18 17:57:57 +02:00
Frank Denis 1afd573b0d Add builds for linux/riscv64 2022-05-18 13:31:46 +02:00
Frank Denis c367a82ac0 Bump miekg/dns 2022-05-11 19:38:45 +02:00
dependabot[bot] 9c8c327703
Bump github/codeql-action from 1 to 2 (#2102)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 1 to 2.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-26 08:20:41 -07:00
livingentity 207d44323d
Update serversInfo.go (#2092) 2022-04-16 21:26:38 +02:00
Frank Denis 3eac156789 Update burntSushi/toml 2022-04-05 14:05:53 +02:00
Frank Denis 5fca7ea49e Back to VividCortex/ewma 2022-04-05 14:04:26 +02:00
Frank Denis 77dc3b1e85 Update miekg/dns 2022-04-03 23:08:29 +02:00
Frank Denis 66f019d886 Revert "regression: fix ewma warmup again (#2079)"
This reverts commit f67e9cab32.
2022-04-03 23:01:03 +02:00
livingentity f67e9cab32
regression: fix ewma warmup again (#2079)
* Update estimators.go

* Update go.mod

* Update modules.txt

* Update go.sum

* Update serversInfo.go

* Update estimators.go

* Update serversInfo.go
2022-04-02 17:41:36 +02:00
Frank Denis 5d023d2a7c Revert "New feature: sleep mode"
This reverts commit e931b234b7.
2022-04-02 09:33:49 +02:00
Frank Denis e931b234b7 New feature: sleep mode 2022-03-31 20:51:34 +02:00
quindecim ed2c880648
Add sources in [domains-blocklist.conf] (#2039)
* Remove duplicate in [domains-blocklist.conf]

__NextDNS CNAME cloaking list__ is already contained in [your big source](https://github.com/notracking/hosts-blocklists/blob/master/SOURCES.md?plain=1#L47). No reason to merge it twice.

* Remove more duplicates in [domains-blocklist.conf]

* Revert previous commit

* Add anudeepND and lightswitch05 blocklists
2022-03-29 14:46:46 +02:00
Frank Denis c467e20311 Update deps 2022-03-28 12:35:01 +02:00
Frank Denis df3fb0c9f8 Keep lines short
$ golines -w -m 120 --shorten-comments .
2022-03-23 17:48:48 +01:00
Frank Denis c0435772d4 -resolve: report ECS support
Note that we can't randomize the source network, as Google and
possibly others refuse networks that don't get BGP announcements.
2022-03-14 17:04:54 +01:00
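A hedged sketch of how an ECS probe can be built with `miekg/dns` (the domain, prefix and overall structure are assumptions, not the actual `-resolve` implementation):

```go
package main

import (
	"fmt"
	"net"

	"github.com/miekg/dns"
)

// The query carries an EDNS client-subnet option; a resolver that
// honours ECS reflects a non-zero scope in its reply. As noted above,
// the prefix must belong to a really-announced network.
func main() {
	msg := new(dns.Msg)
	msg.SetQuestion("example.com.", dns.TypeA)

	opt := new(dns.OPT)
	opt.Hdr.Name = "."
	opt.Hdr.Rrtype = dns.TypeOPT
	subnet := &dns.EDNS0_SUBNET{
		Code:          dns.EDNS0SUBNET,
		Family:        1, // IPv4
		SourceNetmask: 24,
		Address:       net.ParseIP("192.0.2.0").To4(),
	}
	opt.Option = append(opt.Option, subnet)
	msg.Extra = append(msg.Extra, opt)

	fmt.Println(msg.String())
}
```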
Frank Denis 49c17f8e98 -resolve: use TXT records to get resolver information 2022-03-14 16:11:10 +01:00
Frank Denis 0465cd35ef miekg/dns update 2022-03-14 16:02:51 +01:00
dependabot[bot] 08420917a5
Bump actions/setup-go from 2.2.0 to 3 (#2054)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 2.2.0 to 3.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.2.0...v3)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-07 17:01:40 +01:00
livingentity 87d9653ec2
Remove unused functions (#2057)
They aren't used anywhere.
2022-03-07 17:01:18 +01:00
BigDargon d30c44a6a8
Change bootstrap resolver Quad9 (with ECS) (#2056) 2022-03-02 13:18:20 +01:00
dependabot[bot] 911108149b
Bump actions/checkout from 2 to 3 (#2055)
Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-03-02 10:15:29 +01:00
dependabot[bot] a8aa4cb8e6
Bump actions/setup-go from 2.1.5 to 2.2.0 (#2035)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 2.1.5 to 2.2.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.1.5...v2.2.0)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-21 14:17:33 +01:00
Frank Denis ca076ce133 Size estimator: provide the slide size 2022-02-21 14:16:13 +01:00
Frank Denis 034d3bd424 Switch to lifenjoiner's ewma variant 2022-02-21 09:14:24 +01:00
Frank Denis c08852feb1 Update deps; sync vendor 2022-02-21 09:00:21 +01:00
Frank Denis 9373cc7162 Use SimpleEWMA for the question size estimator 2022-02-20 23:40:32 +01:00
Frank Denis cb140673fa Set the number of warmup samples to 1 for the RTT estimator 2022-02-20 23:38:42 +01:00
Frank Denis 7956ba5b10 Switch to an ewma fork that allows setting the warmup samples # 2022-02-20 23:38:06 +01:00
livingentity 9ec8a35468
restore old logic/constants (#2045)
* fix indices

* Update serversInfo.go

For safety go back to former logic, just generalized for lbStrategy, until someone comes up with an actual improvement.

* restore old logic/constants
2022-02-19 17:55:36 +01:00
livingentity ac6abfb985
LBStrategy-aware estimator (#2043)
* fix estimator

* LBStrategy-aware estimator

* typo

* cosmetics
2022-02-15 20:17:48 +01:00
quindecim a20d1685b2
Another minor cosmetic fix to [example-dnscrypt-proxy.toml] (#2036) 2022-02-10 15:27:53 +01:00
livingentity 62092726ec
Minor cosmetic toml changes (#2034)
* Minor cosmetic toml changes

* Minor cosmetic toml changes
2022-02-10 08:49:04 +01:00
Frank Denis f38a5463b0 Indent comments 2022-02-09 12:57:02 +01:00
quindecim 7a54406415
Use the same format logic throughout the document (#2029)
* Use the same spacing logic throughout the document

* Fix previous commit

* Fix previous commit, again

* Use the same logic in comments too
2022-02-09 12:49:22 +01:00
quindecim bce0405c0a
Add more sources in [domains-blocklist.conf] file (#2031)
[OISD nsfw] in [Block pornography] section

[Developer Dan's Hosts: Dating Services] in [Block dating websites] section
[Developer Dan's Hosts: Facebook] in [Block social media sites] section
2022-02-09 12:47:39 +01:00
quindecim 29a3442306
[FIX] Start to use wildcards urls for OISD lists (#2027) 2022-02-08 13:31:32 +01:00
Frank Denis 8ed98cacae Bump miekg/dns up 2022-02-08 09:26:55 +01:00
Peter Dave Hello ef1c70e87d
Remove README.md "Contribute" link as the target doesn't exist (#2015) 2022-02-03 01:30:10 +01:00
Frank Denis 4c67e790f6 -list command: print ODoH targets addresses 2022-02-01 08:19:46 +01:00
Frank Denis 4eeed5816f Fix funky indentation for CloakedPTR 2022-02-01 08:18:45 +01:00
Frank Denis c10e6e0635 Local DoH: add support for requests using the GET method
Fixes #2012
2022-01-31 14:56:46 +01:00
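A minimal sketch of RFC 8484 GET handling, assuming a hypothetical handler and listen address: the DNS message travels in the `dns` query parameter, base64url-encoded without padding, and the decoded packet is then processed like the body of a POST request.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/http"
)

func localDoHHandler(w http.ResponseWriter, r *http.Request) {
	var packet []byte
	switch r.Method {
	case http.MethodGet:
		// RFC 8484: ?dns=<base64url-encoded DNS message, no padding>
		decoded, err := base64.RawURLEncoding.DecodeString(r.URL.Query().Get("dns"))
		if err != nil || len(decoded) == 0 {
			http.Error(w, "invalid dns parameter", http.StatusBadRequest)
			return
		}
		packet = decoded
	case http.MethodPost:
		// read r.Body as before
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	fmt.Fprintf(w, "received %d bytes\n", len(packet)) // placeholder response
}

func main() {
	http.HandleFunc("/dns-query", localDoHHandler)
	_ = http.ListenAndServe("127.0.0.1:3053", nil)
}
```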
Frank Denis e6089449b6 Update domains-blocklist.conf examples:
- Change KADhosts file to the base one
- Remove CHEF-KOCH and GameIndustry lists that don't exist any more

Reported by @remyabel2 - Thanks!
2022-01-30 21:20:19 +01:00
mibere 706c1ab286
Download mirror dnscrypt.net removed (#2003) 2022-01-24 01:36:30 +01:00
Frank Denis f7e3381650 generate-domains-blocklist: parse names prefixed with `*.` 2022-01-23 00:55:26 +01:00
cobratbq 7a8bd35009
systemd: use constants and update status on ready (#1993)
Systemd-notify signaling indicates the status of dnscrypt-proxy when
starting as 'Type=notify' systemd service. However, the status is not
updated when initialization completes; instead it always shows
"Starting". Now fixed.
2022-01-19 20:30:15 +01:00
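For reference, a raw sd_notify sketch of the READY/STATUS update described above: a `Type=notify` service reports over the unix datagram socket that systemd passes in `NOTIFY_SOCKET`. dnscrypt-proxy itself goes through its service wrapper, so this is illustrative only.

```go
package main

import (
	"net"
	"os"
)

// sdNotify sends a state string ("READY=1", "STATUS=...") to systemd.
func sdNotify(state string) error {
	socket := os.Getenv("NOTIFY_SOCKET")
	if socket == "" {
		return nil // not running under a notify-type systemd unit
	}
	conn, err := net.Dial("unixgram", socket)
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = conn.Write([]byte(state))
	return err
}

func main() {
	_ = sdNotify("READY=1\nSTATUS=dnscrypt-proxy is ready\n")
}
```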
Frank Denis 06733f57ed If a relay has multiple names, print the one matching the protocol
Fixes #1992
2022-01-17 19:43:12 +01:00
Frank Denis 4fd26029c7 Update BurntSushi/toml 2022-01-13 09:22:22 +01:00
Frank Denis 351bced7c5 New kardianos/service with support for OpenRC 2022-01-11 16:05:08 +01:00
Frank Denis 916e84e798 2022 2021-12-31 23:56:13 +01:00
ValdikSS 53f3a0e63d
Fix broken odoh link in readme (#1976)
* Fix broken odoh link in readme
2021-12-27 12:30:41 +01:00
Frank Denis 4e9f0382ee miekg/dns update 2021-12-24 09:16:59 +01:00
dependabot[bot] 9121f4f359
Bump actions/setup-go from 2.1.4 to 2.1.5 (#1968)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 2.1.4 to 2.1.5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v2.1.4...v2.1.5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-12-22 17:59:04 +01:00
Frank Denis e73459f558 Update deps 2021-12-22 14:00:37 +01:00
Frank Denis fbfc2d57a7 omit comparison to bool constant, can be simplified to !cloakedName.isIP
Reported by GitHub's code scanning
2021-12-16 10:43:40 +01:00
Frank Denis b9d6b22ce1 Align 2021-12-13 14:01:13 +01:00
Ian Bashford 1b6caba307
allow ptr queries for cloaked domains (#1958)
* allow ptr queries for cloaked domains

* multi ips per PTR returned + cleanup

* some string tidy up

* enable config file switch

* add cloaked ptr test

* enable cloak ptrs in test scenario

* fix reverse ipv6 ptr lookup

* added ipv6 cloaked ptr test
2021-12-13 14:00:13 +01:00
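A small sketch of the reverse-name side of cloaked PTR support, using `miekg/dns`'s `ReverseAddr` (example IPs taken from the test data; the mapping shown is an assumption, not the plugin's code):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

// Every cloaked IP (IPv4 or IPv6) maps to an in-addr.arpa / ip6.arpa
// name, which can then be answered with the cloaked host name.
func main() {
	for _, ip := range []string{"192.168.100.100", "fd02::1"} {
		arpa, err := dns.ReverseAddr(ip)
		if err != nil {
			continue
		}
		fmt.Printf("%s -> %s PTR www.dnscrypt-test.com.\n", ip, arpa)
	}
}
```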
CNMan 27e93a53cf
minor typo fix (#1951) 2021-11-30 18:26:34 +01:00
Frank Denis 9e7221c31c Update deps 2021-11-05 13:29:16 +01:00
Frank Denis f6f63743ce Update go-minisign 2021-10-28 19:54:01 +02:00
Frank Denis 561e849889 Add a forwarding example for local reverse entries 2021-10-17 15:53:54 +02:00
a1346054 766e149699
Fix typo and alignment in example-dnscrypt-proxy.toml (#1915) 2021-10-10 19:19:45 +02:00
Frank Denis e2ada45598 Update deps 2021-10-09 13:35:18 +02:00
3015 changed files with 147036 additions and 361528 deletions


@ -23,7 +23,7 @@ ln ../windows/* win64/
zip -9 -r dnscrypt-proxy-win64-${PACKAGE_VERSION:-dev}.zip win64
go clean
env GO386=387 GOOS=openbsd GOARCH=386 go build -mod vendor -ldflags="-s -w"
env GO386=softfloat GOOS=openbsd GOARCH=386 go build -mod vendor -ldflags="-s -w"
mkdir openbsd-i386
ln dnscrypt-proxy openbsd-i386/
ln ../LICENSE example-dnscrypt-proxy.toml localhost.pem example-*.txt openbsd-i386/
@ -141,6 +141,13 @@ ln dnscrypt-proxy linux-mips64le/
ln ../LICENSE example-dnscrypt-proxy.toml localhost.pem example-*.txt linux-mips64le/
tar czpvf dnscrypt-proxy-linux_mips64le-${PACKAGE_VERSION:-dev}.tar.gz linux-mips64le
go clean
env CGO_ENABLED=0 GOOS=linux GOARCH=riscv64 go build -mod vendor -ldflags="-s -w"
mkdir linux-riscv64
ln dnscrypt-proxy linux-riscv64/
ln ../LICENSE example-dnscrypt-proxy.toml localhost.pem example-*.txt linux-riscv64/
tar czpvf dnscrypt-proxy-linux_riscv64-${PACKAGE_VERSION:-dev}.tar.gz linux-riscv64
go clean
env GOOS=darwin GOARCH=amd64 go build -mod vendor -ldflags="-s -w"
mkdir macos-x86_64

.ci/ci-package.sh (new executable file, 58 lines)

@ -0,0 +1,58 @@
#! /bin/sh
PACKAGE_VERSION="$1"
cd dnscrypt-proxy || exit 1
# setup the environment
sudo apt-get update -y
sudo apt-get install -y wget wine dotnet-sdk-6.0
sudo dpkg --add-architecture i386 && sudo apt-get update && sudo apt-get install -y wine32
sudo apt-get install -y unzip
export WINEPREFIX="$HOME"/.wine32
export WINEARCH=win32
export WINEDEBUG=-all
wget https://dl.winehq.org/wine/wine-mono/8.1.0/wine-mono-8.1.0-x86.msi
WINEPREFIX="$HOME/.wine32" WINEARCH=win32 wineboot --init
WINEPREFIX="$HOME/.wine32" WINEARCH=win32 wine msiexec /i wine-mono-8.1.0-x86.msi
mkdir "$HOME"/.wine32/drive_c/temp
mkdir -p "$HOME"/.wine/drive_c/temp
wget https://github.com/wixtoolset/wix3/releases/download/wix3112rtm/wix311-binaries.zip -nv -O wix.zip
unzip wix.zip -d "$HOME"/wix
rm -f wix.zip
builddir=$(pwd)
srcdir=$(
cd ..
pwd
)
version=$PACKAGE_VERSION
cd "$HOME"/wix || exit
ln -s "$builddir" "$HOME"/wix/build
ln -s "$srcdir"/contrib/msi "$HOME"/wix/wixproj
echo "builddir: $builddir"
# build the msi's
#################
for arch in x64 x86; do
binpath="win32"
if [ "$arch" = "x64" ]; then
binpath="win64"
fi
echo $arch
wine candle.exe -dVersion="$version" -dPlatform=$arch -dPath=build\\$binpath -arch $arch wixproj\\dnscrypt.wxs -out build\\dnscrypt-$arch.wixobj
wine light.exe -out build\\dnscrypt-proxy-$arch-"$version".msi build\\dnscrypt-$arch.wixobj -sval
done
cd "$builddir" || exit


@ -66,10 +66,15 @@ t || dig -p${DNS_PORT} +dnssec darpa.mil @127.0.0.1 2>&1 | grep -Fvq 'RRSIG' ||
t || dig -p${DNS_PORT} +dnssec www.darpa.mil @127.0.0.1 2>&1 | grep -Fvq 'RRSIG' || fail
section
t || dig -p${DNS_PORT} +short cloaked.com @127.0.0.1 | grep -Eq '1.1.1.1|1.0.0.1' || fail
t || dig -p${DNS_PORT} +short www.cloaked2.com @127.0.0.1 | grep -Eq '1.1.1.1|1.0.0.1' || fail
t || dig -p${DNS_PORT} +short cloakedunregistered.com @127.0.0.1 | grep -Eq '1.1.1.1|1.0.0.1' || fail
t || dig -p${DNS_PORT} +short MX cloakedunregistered.com @127.0.0.1 | grep -Fq 'locally blocked' || fail
t || dig -p${DNS_PORT} +short MX example.com @127.0.0.1 | grep -Fvq 'locally blocked' || fail
t || dig -p${DNS_PORT} NS cloakedunregistered.com @127.0.0.1 | grep -Fiq 'gtld-servers.net' || fail
t || dig -p${DNS_PORT} +short www.cloakedunregistered2.com @127.0.0.1 | grep -Eq '1.1.1.1|1.0.0.1' || fail
t || dig -p${DNS_PORT} +short www.dnscrypt-test @127.0.0.1 | grep -Fq '192.168.100.100' || fail
t || dig -p${DNS_PORT} a.www.dnscrypt-test @127.0.0.1 | grep -Fq 'NXDOMAIN' || fail
t || dig -p${DNS_PORT} +short ptr 101.100.168.192.in-addr.arpa. @127.0.0.1 | grep -Eq 'www.dnscrypt-test.com' || fail
t || dig -p${DNS_PORT} +short ptr 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.2.0.d.f.ip6.arpa. @127.0.0.1 | grep -Eq 'ipv6.dnscrypt-test.com' || fail
section
t || dig -p${DNS_PORT} telemetry.example @127.0.0.1 | grep -Fq 'locally blocked' || fail
@ -117,8 +122,8 @@ t || grep -Eq 'invalid.*SYNTH' query.log || fail
t || grep -Eq '168.192.in-addr.arpa.*SYNTH' query.log || fail
t || grep -Eq 'darpa.mil.*FORWARD' query.log || fail
t || grep -Eq 'www.darpa.mil.*FORWARD' query.log || fail
t || grep -Eq 'cloaked.com.*CLOAK' query.log || fail
t || grep -Eq 'www.cloaked2.com.*CLOAK' query.log || fail
t || grep -Eq 'cloakedunregistered.com.*CLOAK' query.log || fail
t || grep -Eq 'www.cloakedunregistered2.com.*CLOAK' query.log || fail
t || grep -Eq 'www.dnscrypt-test.*CLOAK' query.log || fail
t || grep -Eq 'a.www.dnscrypt-test.*NXDOMAIN' query.log || fail
t || grep -Eq 'telemetry.example.*REJECT' query.log || fail


@ -1,3 +1,5 @@
cloaked.* one.one.one.one
*.cloaked2.* one.one.one.one # inline comment
=www.dnscrypt-test 192.168.100.100
cloakedunregistered.* one.one.one.one
*.cloakedunregistered2.* one.one.one.one # inline comment
=www.dnscrypt-test 192.168.100.100
=www.dnscrypt-test.com 192.168.100.101
=ipv6.dnscrypt-test.com fd02::1


@ -9,7 +9,7 @@ file = 'query.log'
stamp = 'sdns://BQcAAAAAAAAADm9kb2guY3J5cHRvLnN4Ci9kbnMtcXVlcnk'
[static.'odohrelay']
stamp = 'sdns://hQcAAAAAAAAAACCi3jNJDEdtNW4tvHN8J3lpIklSa2Wrj7qaNCgEgci9_BpvZG9oLXJlbGF5LmVkZ2Vjb21wdXRlLmFwcAEv'
stamp = 'sdns://hQcAAAAAAAAADDg5LjM4LjEzMS4zOAAYb2RvaC1ubC5hbGVrYmVyZy5uZXQ6NDQzBi9wcm94eQ'
[anonymized_dns]
routes = [


@ -10,6 +10,7 @@ block_unqualified = true
block_undelegated = true
forwarding_rules = 'forwarding-rules.txt'
cloaking_rules = 'cloaking-rules.txt'
cloak_ptr = true
cache = true
[local_doh]


@ -13,9 +13,7 @@ cache = true
[query_log]
file = 'query.log'
[static]
[static.'myserver']
stamp = 'sdns://AQcAAAAAAAAADjIxMi40Ny4yMjguMTM2IOgBuE6mBr-wusDOQ0RbsV66ZLAvo8SqMa4QY2oHkDJNHzIuZG5zY3J5cHQtY2VydC5mci5kbnNjcnlwdC5vcmc'
stamp = 'sdns://AQcAAAAAAAAADjIxMi40Ny4yMjguMTM2IOgBuE6mBr-wusDOQ0RbsV66ZLAvo8SqMa4QY2oHkDJNHzIuZG5zY3J5cHQtY2VydC5mci5kbnNjcnlwdC5vcmc'

.github/workflows/autocloser.yml (new vendored file, 12 lines)

@ -0,0 +1,12 @@
name: Autocloser
on: [issues]
jobs:
autoclose:
runs-on: ubuntu-latest
steps:
- name: Autoclose issues that did not follow issue template
uses: roots/issue-closer@v1.2
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-close-message: "This issue was automatically closed because it did not follow the issue template. We use the issue tracker exclusively for bug reports and feature additions that have been previously discussed. However, this issue appears to be a support request. Please use the discussion forums for support requests."
issue-pattern: ".*(do we replicate the issue|Expected behavior|raised as discussion|# Impact).*"


@ -13,15 +13,20 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
uses: github/codeql-action/init@v3
- name: Autobuild
uses: github/codeql-action/autobuild@v1
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1
uses: github/codeql-action/analyze@v3


@ -25,21 +25,21 @@ jobs:
steps:
- name: Get the version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
run: echo "VERSION=${GITHUB_REF/refs\/tags\//}" >> $GITHUB_OUTPUT
- name: Check out code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v2.1.4
uses: actions/setup-go@v5
with:
go-version: 1
check-latest: true
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Test suite
run: |
go version
go mod vendor
cd .ci
./ci-test.sh
cd -
@ -49,6 +49,11 @@ jobs:
run: |
.ci/ci-build.sh "${{ steps.get_version.outputs.VERSION }}"
- name: Package
if: startsWith(github.ref, 'refs/tags/')
run: |
.ci/ci-package.sh "${{ steps.get_version.outputs.VERSION }}"
- name: Install minisign and sign
if: startsWith(github.ref, 'refs/tags/')
run: |
@ -78,7 +83,7 @@ jobs:
prerelease: false
- name: Upload release assets
uses: softprops/action-gh-release@v1
uses: softprops/action-gh-release@69320dbe05506a9a39fc8ae11030b214ec2d1f87
if: startsWith(github.ref, 'refs/tags/')
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@ -87,3 +92,4 @@ jobs:
dnscrypt-proxy/*.zip
dnscrypt-proxy/*.tar.gz
dnscrypt-proxy/*.minisig
dnscrypt-proxy/*.msi


@ -6,7 +6,7 @@ jobs:
Scan-Build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v1
- uses: actions/checkout@v4
- name: Perform ShiftLeft Scan
uses: ShiftLeftSecurity/scan-action@master
@ -18,6 +18,6 @@ jobs:
output: reports
- name: Upload report
uses: github/codeql-action/upload-sarif@v1
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: reports

.gitignore (vendored, 3 lines added)

@ -14,3 +14,6 @@ dnscrypt-proxy/dnscrypt-proxy
.ci/*.md
.ci/*.md.minisig
.ci/test-dnscrypt-proxy.toml
contrib/msi/*.msi
contrib/msi/*.wixpdb
contrib/msi/*.wixobj


@ -1,3 +1,58 @@
# Version 2.1.5
- dnscrypt-proxy can be compiled with Go 1.21.0+
- Responses to blocked queries now include extended error codes
- Reliability of connections using HTTP/3 has been improved
- New configuration directive: `tls_key_log_file`. When defined, this
is the path to a file where TLS secret keys will be written to, so
that DoH traffic can be locally inspected.
# Version 2.1.4
- Fixes a regression from version 2.1.3: when cloaking was enabled,
blocked responses were returned for records that were not A/AAAA/PTR
even for names that were not in the cloaked list.
# Version 2.1.3
- DNS-over-HTTP/3 (QUIC) should be more reliable. In particular,
version 2.1.2 required another (non-QUIC) resolver to be present for
bootstrapping, or the resolver's IP address to be present in the
stamp. This is not the case any more.
- dnscrypt-proxy is now compatible with Go 1.20+
- Commands (-check, -show-certs, -list, -list-all) now ignore log
files and directly output the result to the standard output.
- The `cert_ignore_timestamp` configuration switch is now documented.
It allows ignoring timestamps for DNSCrypt certificate verification,
until a first server is available. This should only be used on devices
that don't have any ways to set the clock before DNS service is up.
However, a safer alternative remains to use an NTP server with a fixed
IP address (such as time.google.com), configured in the captive portals
file.
- Cloaking: when a name is cloaked, unsupported record types now
return a blocked response rather than the actual records.
- systemd: report Ready earlier as dnscrypt-proxy can itself manage
retries for updates/refreshes.
# Version 2.1.2
- Support for DoH over HTTP/3 (DoH3, HTTP over QUIC) has been added.
Compatible servers will automatically use it. Note that QUIC uses UDP
(usually over port 443, like DNSCrypt) instead of TCP.
- In previous versions, memory usage kept growing due to channels not
being properly closed, causing goroutines to pile up. This was fixed,
resulting in an important reduction of memory usage. Thanks to
@lifenjoiner for investigating and fixing this!
- DNS64: `CNAME` records are now translated like other responses.
Thanks to @ignoramous for this!
- A relay whose name has been configured, but doesn't exist in the
list of available relays is now a hard error. Thanks to @lifenjoiner!
- Mutexes/locking: bug fixes and improvements, by @ignoramous
- Official packages now include linux/riscv64 builds.
- `dnscrypt-proxy -resolve` now reports if ECS (EDNS-clientsubnet) is
supported by the server.
- `dnscrypt-proxy -list` now includes ODoH (Oblivious DoH) servers.
- Local DoH: queries made using the `GET` method are now handled.
- The service can now be installed on OpenRC-based systems.
- `PTR` queries are now supported for cloaked domains. Contributed by
Ian Bashford, thanks!
# Version 2.1.1
This is a bugfix only release, addressing regressions introduced in
version 2.1.0:


@ -1,6 +1,6 @@
ISC License
Copyright (c) 2018-2021, Frank Denis <j at pureftpd dot org>
Copyright (c) 2018-2023, Frank Denis <j at pureftpd dot org>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above


@ -2,14 +2,14 @@
[![Financial Contributors on Open Collective](https://opencollective.com/dnscrypt/all/badge.svg?label=financial+contributors)](https://opencollective.com/dnscrypt)
[![DNSCrypt-Proxy Release](https://img.shields.io/github/release/dnscrypt/dnscrypt-proxy.svg?label=Latest%20Release&style=popout)](https://github.com/dnscrypt/dnscrypt-proxy/releases/latest)
[![Build Status](https://github.com/DNSCrypt/dnscrypt-proxy/workflows/CI%20and%20optionally%20publish/badge.svg)](https://github.com/DNSCrypt/dnscrypt-proxy/actions)
[![Build Status](https://github.com/DNSCrypt/dnscrypt-proxy/actions/workflows/releases.yml/badge.svg)](https://github.com/DNSCrypt/dnscrypt-proxy/actions/workflows/releases.yml)
![CodeQL scan](https://github.com/DNSCrypt/dnscrypt-proxy/workflows/CodeQL%20scan/badge.svg)
![ShiftLeft Scan](https://github.com/DNSCrypt/dnscrypt-proxy/workflows/ShiftLeft%20Scan/badge.svg)
[![#dnscrypt-proxy:matrix.org](https://img.shields.io/matrix/dnscrypt-proxy:matrix.org.svg?label=DNSCrypt-Proxy%20Matrix%20Chat&server_fqdn=matrix.org&style=popout)](https://matrix.to/#/#dnscrypt-proxy:matrix.org)
## Overview
A flexible DNS proxy, with support for modern encrypted DNS protocols such as [DNSCrypt v2](https://dnscrypt.info/protocol), [DNS-over-HTTPS](https://www.rfc-editor.org/rfc/rfc8484.txt), [Anonymized DNSCrypt](https://github.com/DNSCrypt/dnscrypt-protocol/blob/master/ANONYMIZED-DNSCRYPT.txt) and [ODoH (Oblivious DoH)](https://github.com/DNSCrypt/dnscrypt-resolvers/blob/master/v3/odoh.md).
A flexible DNS proxy, with support for modern encrypted DNS protocols such as [DNSCrypt v2](https://dnscrypt.info/protocol), [DNS-over-HTTPS](https://www.rfc-editor.org/rfc/rfc8484.txt), [Anonymized DNSCrypt](https://github.com/DNSCrypt/dnscrypt-protocol/blob/master/ANONYMIZED-DNSCRYPT.txt) and [ODoH (Oblivious DoH)](https://github.com/DNSCrypt/dnscrypt-resolvers/blob/master/v3/odoh-servers.md).
* **[dnscrypt-proxy documentation](https://dnscrypt.info/doc) ← Start here**
* [DNSCrypt project home page](https://dnscrypt.info/)
@ -25,7 +25,7 @@ Available as source code and pre-built binaries for most operating systems and a
## Features
* DNS traffic encryption and authentication. Supports DNS-over-HTTPS (DoH) using TLS 1.3, DNSCrypt, Anonymized DNS and ODoH
* DNS traffic encryption and authentication. Supports DNS-over-HTTPS (DoH) using TLS 1.3 and QUIC, DNSCrypt, Anonymized DNS and ODoH
* Client IP addresses can be hidden using Tor, SOCKS proxies or Anonymized DNS relays
* DNS query monitoring, with separate log files for regular and suspicious queries
* Filtering: block ads, malware, and other unwanted content. Compatible with all DNS services
@ -60,7 +60,8 @@ Up-to-date, pre-built binaries are available for:
* Linux/mips64le
* Linux/x86
* Linux/x86_64
* MacOS X
* macOS/arm64
* macOS/x86_64
* NetBSD/x86
* NetBSD/x86_64
* OpenBSD/x86
@ -74,7 +75,7 @@ How to use these files, as well as how to verify their signatures, are documente
### Code Contributors
This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
This project exists thanks to all the people who contribute.
<a href="https://github.com/dnscrypt/dnscrypt-proxy/graphs/contributors"><img src="https://opencollective.com/dnscrypt/contributors.svg?width=890&button=false" /></a>
### Financial Contributors

contrib/msi/Dockerfile Normal file
View File

@ -0,0 +1,21 @@
FROM ubuntu:latest
MAINTAINER dnscrypt-authors
RUN apt-get update && \
apt-get install -y wget wine dotnet-sdk-6.0 && \
dpkg --add-architecture i386 && apt-get update && apt-get install -y wine32
ENV WINEPREFIX=/root/.wine32 WINEARCH=win32 WINEDEBUG=-all
RUN wget https://dl.winehq.org/wine/wine-mono/8.1.0/wine-mono-8.1.0-x86.msi && \
WINEPREFIX="$HOME/.wine32" WINEARCH=win32 wineboot --init && \
WINEPREFIX="$HOME/.wine32" WINEARCH=win32 wine msiexec /i wine-mono-8.1.0-x86.msi && \
mkdir $WINEPREFIX/drive_c/temp && \
apt-get install -y unzip && \
wget https://github.com/wixtoolset/wix3/releases/download/wix3112rtm/wix311-binaries.zip -nv -O wix.zip && \
unzip wix.zip -d /wix && \
rm -f wix.zip
WORKDIR /wix

contrib/msi/README.md Normal file
View File

@ -0,0 +1,13 @@
# Scripts and utilities related to building a .msi (Windows Installer) file.
## Docker test image for building an MSI locally
```sh
docker build . -f Dockerfile -t ubuntu:dnscrypt-msi
```
## Test building msi files for intel win32 & win64
```sh
./build.sh
```

contrib/msi/build.sh Executable file
View File

@ -0,0 +1,30 @@
#! /bin/sh
version=0.0.0
gitver=$(git describe --tags --always --match="[0-9]*.[0-9]*.[0-9]*" --exclude='*[^0-9.]*')
if [ "$gitver" != "" ]; then
version=$gitver
fi
# build the image by running: docker build . -f Dockerfile -t ubuntu:dnscrypt-msi
if [ "$(docker image list -q ubuntu:dnscrypt-msi)" = "" ]; then
docker build . -f Dockerfile -t ubuntu:dnscrypt-msi
fi
image=ubuntu:dnscrypt-msi
for arch in x64 x86; do
binpath="win32"
if [ "$arch" = "x64" ]; then
binpath="win64"
fi
src=$(
cd ../../dnscrypt-proxy/$binpath || exit
pwd
)
echo "$src"
docker run --rm -v "$(pwd)":/wixproj -v "$src":/src $image wine candle.exe -dVersion="$version" -dPlatform=$arch -dPath=\\src -arch $arch \\wixproj\\dnscrypt.wxs -out \\wixproj\\dnscrypt-$arch.wixobj
docker run --rm -v "$(pwd)":/wixproj -v "$src":/src $image wine light.exe -out \\wixproj\\dnscrypt-proxy-$arch-"$version".msi \\wixproj\\dnscrypt-$arch.wixobj -sval
done

contrib/msi/dnscrypt.wxs Normal file
View File

@ -0,0 +1,60 @@
<?xml version="1.0"?>
<?if $(var.Platform)="x64" ?>
<?define Program_Files="ProgramFiles64Folder"?>
<?else ?>
<?define Program_Files="ProgramFilesFolder"?>
<?endif ?>
<?ifndef var.Version?>
<?error Undefined Version variable?>
<?endif ?>
<?ifndef var.Path?>
<?error Undefined Path variable?>
<?endif ?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
<Product Id="*"
UpgradeCode="fbf99dd8-c21e-4f9b-a632-de53bb64c45e"
Name="dnscrypt-proxy"
Version="$(var.Version)"
Manufacturer="DNSCrypt"
Language="1033">
<Package InstallerVersion="200" Compressed="yes" Comments="Windows Installer Package" InstallScope="perMachine" />
<Media Id="1" Cabinet="product.cab" EmbedCab="yes" />
<MajorUpgrade DowngradeErrorMessage="A later version of [ProductName] is already installed. Setup will now exit." />
<Upgrade Id="fbf99dd8-c21e-4f9b-a632-de53bb64c45e">
<UpgradeVersion Minimum="$(var.Version)" OnlyDetect="yes" Property="NEWERVERSIONDETECTED" />
<UpgradeVersion Minimum="2.1.0" Maximum="$(var.Version)" IncludeMinimum="yes" IncludeMaximum="no" Property="OLDERVERSIONBEINGUPGRADED" />
</Upgrade>
<Condition Message="A newer version of this software is already installed.">NOT NEWERVERSIONDETECTED</Condition>
<Directory Id="TARGETDIR" Name="SourceDir">
<Directory Id="$(var.Program_Files)">
<Directory Id="INSTALLDIR" Name="DNSCrypt">
<Component Id="ApplicationFiles" Guid="7d693c0b-71d8-436a-9c84-60a11dc74092">
<File Id="dnscryptproxy.exe" KeyPath="yes" Source="$(var.Path)\dnscrypt-proxy.exe" DiskId="1"/>
<File Source="$(var.Path)\LICENSE"></File>
<File Source="$(var.Path)\service-install.bat"></File>
<File Source="$(var.Path)\service-restart.bat"></File>
<File Source="$(var.Path)\service-uninstall.bat"></File>
<File Source="$(var.Path)\example-dnscrypt-proxy.toml"></File>
</Component>
<Component Id="ConfigInstall" Guid="db7b691e-f7c7-4c9a-92e1-c6f21ce6430f" KeyPath="yes">
<Condition><![CDATA[CONFIGFILE]]></Condition>
<CopyFile Id="dnscryptproxytoml" DestinationDirectory="INSTALLDIR" DestinationName="dnscrypt-proxy.toml" SourceProperty="CONFIGFILE">
</CopyFile>
<RemoveFile Id="RemoveConfig" Directory="INSTALLDIR" Name="dnscrypt-proxy.toml" On="uninstall" />
</Component>
</Directory>
</Directory>
</Directory>
<Feature Id="Complete" Level="1">
<ComponentRef Id="ApplicationFiles" />
<ComponentRef Id="ConfigInstall" />
</Feature>
</Product>
</Wix>

View File

@ -4,6 +4,7 @@ import (
"fmt"
"net"
"strings"
"sync"
"time"
"github.com/jedisct1/dlog"
@ -15,14 +16,13 @@ type CaptivePortalEntryIPs []net.IP
type CaptivePortalMap map[string]CaptivePortalEntryIPs
type CaptivePortalHandler struct {
cancelChannels []chan struct{}
wg sync.WaitGroup
cancelChannel chan struct{}
}
func (captivePortalHandler *CaptivePortalHandler) Stop() {
for _, cancelChannel := range captivePortalHandler.cancelChannels {
cancelChannel <- struct{}{}
<-cancelChannel
}
close(captivePortalHandler.cancelChannel)
captivePortalHandler.wg.Wait()
}
func (ipsMap *CaptivePortalMap) GetEntry(msg *dns.Msg) (*dns.Question, *CaptivePortalEntryIPs) {
@ -116,20 +116,30 @@ func handleColdStartClient(clientPc *net.UDPConn, cancelChannel chan struct{}, i
return false
}
func addColdStartListener(proxy *Proxy, ipsMap *CaptivePortalMap, listenAddrStr string, cancelChannel chan struct{}) error {
listenUDPAddr, err := net.ResolveUDPAddr("udp", listenAddrStr)
func addColdStartListener(
ipsMap *CaptivePortalMap,
listenAddrStr string,
captivePortalHandler *CaptivePortalHandler,
) error {
network := "udp"
isIPv4 := isDigit(listenAddrStr[0])
if isIPv4 {
network = "udp4"
}
listenUDPAddr, err := net.ResolveUDPAddr(network, listenAddrStr)
if err != nil {
return err
}
clientPc, err := net.ListenUDP("udp", listenUDPAddr)
clientPc, err := net.ListenUDP(network, listenUDPAddr)
if err != nil {
return err
}
captivePortalHandler.wg.Add(1)
go func() {
for !handleColdStartClient(clientPc, cancelChannel, ipsMap) {
for !handleColdStartClient(clientPc, captivePortalHandler.cancelChannel, ipsMap) {
}
clientPc.Close()
cancelChannel <- struct{}{}
captivePortalHandler.wg.Done()
}()
return nil
}
@ -138,13 +148,13 @@ func ColdStart(proxy *Proxy) (*CaptivePortalHandler, error) {
if len(proxy.captivePortalMapFile) == 0 {
return nil, nil
}
bin, err := ReadTextFile(proxy.captivePortalMapFile)
lines, err := ReadTextFile(proxy.captivePortalMapFile)
if err != nil {
dlog.Warn(err)
return nil, err
}
ipsMap := make(CaptivePortalMap)
for lineNo, line := range strings.Split(string(bin), "\n") {
for lineNo, line := range strings.Split(lines, "\n") {
line = TrimAndStripInlineComments(line)
if len(line) == 0 {
continue
@ -175,16 +185,19 @@ func ColdStart(proxy *Proxy) (*CaptivePortalHandler, error) {
ipsMap[name] = ips
}
listenAddrStrs := proxy.listenAddresses
cancelChannels := make([]chan struct{}, 0)
captivePortalHandler := CaptivePortalHandler{
cancelChannel: make(chan struct{}),
}
ok := false
for _, listenAddrStr := range listenAddrStrs {
cancelChannel := make(chan struct{})
if err := addColdStartListener(proxy, &ipsMap, listenAddrStr, cancelChannel); err == nil {
cancelChannels = append(cancelChannels, cancelChannel)
err = addColdStartListener(&ipsMap, listenAddrStr, &captivePortalHandler)
if err == nil {
ok = true
}
}
captivePortalHandler := CaptivePortalHandler{
cancelChannels: cancelChannels,
if ok {
err = nil
}
proxy.captivePortalMap = &ipsMap
return &captivePortalHandler, nil
return &captivePortalHandler, err
}
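
The shutdown flow introduced above — one shared cancel channel that is closed, plus a `sync.WaitGroup` that every listener goroutine signals when it exits — is the usual Go pattern for stopping a fan-out of goroutines without leaking them. A self-contained sketch (names are illustrative, not the proxy's):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type handler struct {
	cancel chan struct{}
	wg     sync.WaitGroup
}

func (h *handler) startWorker(id int) {
	h.wg.Add(1)
	go func() {
		defer h.wg.Done()
		for {
			select {
			case <-h.cancel: // closing the channel unblocks every worker at once
				fmt.Printf("worker %d stopped\n", id)
				return
			default:
				time.Sleep(50 * time.Millisecond) // simulate work
			}
		}
	}()
}

// Stop closes the shared channel and waits for all workers to exit,
// so no goroutines are leaked.
func (h *handler) Stop() {
	close(h.cancel)
	h.wg.Wait()
}

func main() {
	h := &handler{cancel: make(chan struct{})}
	for i := 0; i < 3; i++ {
		h.startWorker(i)
	}
	time.Sleep(200 * time.Millisecond)
	h.Stop()
}
```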

View File

@ -4,12 +4,14 @@ import (
"bytes"
"encoding/binary"
"errors"
"io/ioutil"
"net"
"os"
"path"
"strconv"
"strings"
"unicode"
"github.com/jedisct1/dlog"
)
type CryptoConstruction uint16
@ -96,20 +98,6 @@ func Max(a, b int) int {
return b
}
func MinF(a, b float64) float64 {
if a < b {
return a
}
return b
}
func MaxF(a, b float64) float64 {
if a > b {
return a
}
return b
}
func StringReverse(s string) string {
r := []rune(s)
for i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 {
@ -170,10 +158,40 @@ func ExtractHostAndPort(str string, defaultPort int) (host string, port int) {
}
func ReadTextFile(filename string) (string, error) {
bin, err := ioutil.ReadFile(filename)
bin, err := os.ReadFile(filename)
if err != nil {
return "", err
}
bin = bytes.TrimPrefix(bin, []byte{0xef, 0xbb, 0xbf})
return string(bin), nil
}
func isDigit(b byte) bool { return b >= '0' && b <= '9' }
func maybeWritableByOtherUsers(p string) (bool, string, error) {
p = path.Clean(p)
for p != "/" && p != "." {
st, err := os.Stat(p)
if err != nil {
return false, p, err
}
mode := st.Mode()
if mode.Perm()&2 != 0 && !(st.IsDir() && mode&os.ModeSticky == os.ModeSticky) {
return true, p, nil
}
p = path.Dir(p)
}
return false, "", nil
}
func WarnIfMaybeWritableByOtherUsers(p string) {
if ok, px, err := maybeWritableByOtherUsers(p); ok {
if px == p {
dlog.Criticalf("[%s] is writable by other system users - If this is not intentional, it is recommended to fix the access permissions", p)
} else {
dlog.Warnf("[%s] can be modified by other system users because [%s] is writable by other users - If this is not intentional, it is recommended to fix the access permissions", p, px)
}
} else if err != nil {
dlog.Warnf("Error while checking if [%s] is accessible: [%s] : [%s]", p, px, err)
}
}
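
The permission walk above flags a path when it, or any parent directory, is writable by other users and is not a sticky directory (the sticky bit is what makes shared directories such as /tmp safe). A standalone sketch of the same check, usable outside the proxy:

```go
package main

import (
	"fmt"
	"os"
	"path"
)

// worldWritable reports whether p, or any parent directory, is writable by other
// users without the directory sticky bit set — the same condition as above.
func worldWritable(p string) (bool, string, error) {
	p = path.Clean(p)
	for p != "/" && p != "." {
		st, err := os.Stat(p)
		if err != nil {
			return false, p, err
		}
		mode := st.Mode()
		if mode.Perm()&0o002 != 0 && !(st.IsDir() && mode&os.ModeSticky != 0) {
			return true, p, nil
		}
		p = path.Dir(p)
	}
	return false, "", nil
}

func main() {
	// /tmp is normally mode 1777 (world-writable but sticky), so a file kept there
	// is only flagged if the file itself is writable by other users.
	if bad, where, err := worldWritable("/tmp/dnscrypt-proxy.toml"); err == nil && bad {
		fmt.Printf("%s is writable by other users\n", where)
	}
}
```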

View File

@ -38,9 +38,11 @@ type Config struct {
LocalDoH LocalDoHConfig `toml:"local_doh"`
UserName string `toml:"user_name"`
ForceTCP bool `toml:"force_tcp"`
HTTP3 bool `toml:"http3"`
Timeout int `toml:"timeout"`
KeepAlive int `toml:"keepalive"`
Proxy string `toml:"proxy"`
CertRefreshConcurrency int `toml:"cert_refresh_concurrency"`
CertRefreshDelay int `toml:"cert_refresh_delay"`
CertIgnoreTimestamp bool `toml:"cert_ignore_timestamp"`
EphemeralKeys bool `toml:"dnscrypt_ephemeral_keys"`
@ -91,6 +93,7 @@ type Config struct {
LogMaxBackups int `toml:"log_files_max_backups"`
TLSDisableSessionTickets bool `toml:"tls_disable_session_tickets"`
TLSCipherSuite []uint16 `toml:"tls_cipher_suite"`
TLSKeyLogFile string `toml:"tls_key_log_file"`
NetprobeAddress string `toml:"netprobe_address"`
NetprobeTimeout int `toml:"netprobe_timeout"`
OfflineMode bool `toml:"offline_mode"`
@ -98,6 +101,7 @@ type Config struct {
RefusedCodeInResponses bool `toml:"refused_code_in_responses"`
BlockedQueryResponse string `toml:"blocked_query_response"`
QueryMeta []string `toml:"query_meta"`
CloakedPTR bool `toml:"cloak_ptr"`
AnonymizedDNS AnonymizedDNSConfig `toml:"anonymized_dns"`
DoHClientX509Auth DoHClientX509AuthConfig `toml:"doh_client_x509_auth"`
DoHClientX509AuthLegacy DoHClientX509AuthConfig `toml:"tls_client_auth"`
@ -113,7 +117,9 @@ func newConfig() Config {
LocalDoH: LocalDoHConfig{Path: "/dns-query"},
Timeout: 5000,
KeepAlive: 5,
CertRefreshConcurrency: 10,
CertRefreshDelay: 240,
HTTP3: false,
CertIgnoreTimestamp: false,
EphemeralKeys: false,
Cache: true,
@ -140,6 +146,7 @@ func newConfig() Config {
LogMaxBackups: 1,
TLSDisableSessionTickets: false,
TLSCipherSuite: nil,
TLSKeyLogFile: "",
NetprobeTimeout: 60,
OfflineMode: false,
RefusedCodeInResponses: false,
@ -154,6 +161,7 @@ func newConfig() Config {
AnonymizedDNS: AnonymizedDNSConfig{
DirectCertFallback: true,
},
CloakedPTR: false,
}
}
@ -253,7 +261,7 @@ type ServerSummary struct {
IPv6 bool `json:"ipv6"`
Addrs []string `json:"addrs,omitempty"`
Ports []int `json:"ports"`
DNSSEC bool `json:"dnssec"`
DNSSEC *bool `json:"dnssec,omitempty"`
NoLog bool `json:"nolog"`
NoFilter bool `json:"nofilter"`
Description string `json:"description,omitempty"`
@ -284,6 +292,7 @@ type ConfigFlags struct {
Resolve *string
List *bool
ListAll *bool
IncludeRelays *bool
JSONOutput *bool
Check *bool
ConfigFile *string
@ -312,8 +321,12 @@ func findConfigFile(configFile *string) (string, error) {
func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
foundConfigFile, err := findConfigFile(flags.ConfigFile)
if err != nil {
return fmt.Errorf("Unable to load the configuration file [%s] -- Maybe use the -config command-line switch?", *flags.ConfigFile)
return fmt.Errorf(
"Unable to load the configuration file [%s] -- Maybe use the -config command-line switch?",
*flags.ConfigFile,
)
}
WarnIfMaybeWritableByOtherUsers(foundConfigFile)
config := newConfig()
md, err := toml.DecodeFile(foundConfigFile, &config)
if err != nil {
@ -339,7 +352,10 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
dlog.SetLogLevel(dlog.SeverityInfo)
}
dlog.TruncateLogFile(config.LogFileLatest)
if config.UseSyslog {
proxy.showCerts = *flags.ShowCerts || len(os.Getenv("SHOW_CERTS")) > 0
isCommandMode := *flags.Check || proxy.showCerts || *flags.List || *flags.ListAll
if isCommandMode {
} else if config.UseSyslog {
dlog.UseSyslog(true)
} else if config.LogFile != nil {
dlog.UseLogFile(*config.LogFile)
@ -369,6 +385,7 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
proxy.xTransport.tlsDisableSessionTickets = config.TLSDisableSessionTickets
proxy.xTransport.tlsCipherSuite = config.TLSCipherSuite
proxy.xTransport.mainProto = proxy.mainProto
proxy.xTransport.http3 = config.HTTP3
if len(config.BootstrapResolvers) == 0 && len(config.BootstrapResolversLegacy) > 0 {
dlog.Warnf("fallback_resolvers was renamed to bootstrap_resolvers - Please update your configuration")
config.BootstrapResolvers = config.BootstrapResolversLegacy
@ -423,6 +440,7 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
if config.ForceTCP {
proxy.mainProto = "tcp"
}
proxy.certRefreshConcurrency = Max(1, config.CertRefreshConcurrency)
proxy.certRefreshDelay = time.Duration(Max(60, config.CertRefreshDelay)) * time.Minute
proxy.certRefreshDelayAfterFailure = time.Duration(10 * time.Second)
proxy.certIgnoreTimestamp = config.CertIgnoreTimestamp
@ -484,6 +502,7 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
proxy.cacheMaxTTL = config.CacheMaxTTL
proxy.rejectTTL = config.RejectTTL
proxy.cloakTTL = config.CloakTTL
proxy.cloakedPTR = config.CloakedPTR
proxy.queryMeta = config.QueryMeta
@ -616,6 +635,16 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
proxy.skipAnonIncompatibleResolvers = config.AnonymizedDNS.SkipIncompatible
proxy.anonDirectCertFallback = config.AnonymizedDNS.DirectCertFallback
if len(config.TLSKeyLogFile) > 0 {
f, err := os.OpenFile(config.TLSKeyLogFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
if err != nil {
dlog.Fatalf("Unable to create key log file [%s]: [%s]", config.TLSKeyLogFile, err)
}
dlog.Warnf("TLS key log file [%s] enabled", config.TLSKeyLogFile)
proxy.xTransport.keyLogWriter = f
proxy.xTransport.rebuildTransport()
}
if config.DoHClientX509AuthLegacy.Creds != nil {
return errors.New("[tls_client_auth] has been renamed to [doh_client_x509_auth] - Update your config file")
}
@ -635,7 +664,9 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
}
// Backwards compatibility
config.BrokenImplementations.FragmentsBlocked = append(config.BrokenImplementations.FragmentsBlocked, config.BrokenImplementations.BrokenQueryPadding...)
config.BrokenImplementations.FragmentsBlocked = append(
config.BrokenImplementations.FragmentsBlocked,
config.BrokenImplementations.BrokenQueryPadding...)
proxy.serversBlockingFragments = config.BrokenImplementations.FragmentsBlocked
@ -686,8 +717,7 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
} else if len(config.BootstrapResolvers) > 0 {
netprobeAddress = config.BootstrapResolvers[0]
}
proxy.showCerts = *flags.ShowCerts || len(os.Getenv("SHOW_CERTS")) > 0
if !*flags.Check && !*flags.ShowCerts && !*flags.List && !*flags.ListAll {
if !isCommandMode {
if err := NetProbe(proxy, netprobeAddress, netprobeTimeout); err != nil {
return err
}
@ -704,7 +734,9 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
// if 'userName' is set and we are the parent process drop privilege and exit
if len(proxy.userName) > 0 && !proxy.child {
proxy.dropPrivilege(proxy.userName, FileDescriptors)
return errors.New("Dropping privileges is not supporting on this operating system. Unset `user_name` in the configuration file")
return errors.New(
"Dropping privileges is not supporting on this operating system. Unset `user_name` in the configuration file",
)
}
if !config.OfflineMode {
if err := config.loadSources(proxy); err != nil {
@ -715,7 +747,7 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
}
}
if *flags.List || *flags.ListAll {
if err := config.printRegisteredServers(proxy, *flags.JSONOutput); err != nil {
if err := config.printRegisteredServers(proxy, *flags.JSONOutput, *flags.IncludeRelays); err != nil {
return err
}
os.Exit(0)
@ -724,8 +756,12 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
hasSpecificRoutes := false
for _, server := range proxy.registeredServers {
if via, ok := (*proxy.routes)[server.name]; ok {
if server.stamp.Proto != stamps.StampProtoTypeDNSCrypt && server.stamp.Proto != stamps.StampProtoTypeODoHTarget {
dlog.Errorf("DNS anonymization is only supported with the DNSCrypt and ODoH protocols - Connections to [%v] cannot be anonymized", server.name)
if server.stamp.Proto != stamps.StampProtoTypeDNSCrypt &&
server.stamp.Proto != stamps.StampProtoTypeODoHTarget {
dlog.Errorf(
"DNS anonymization is only supported with the DNSCrypt and ODoH protocols - Connections to [%v] cannot be anonymized",
server.name,
)
} else {
dlog.Noticef("Anonymized DNS: routing [%v] via %v", server.name, via)
}
@ -747,14 +783,54 @@ func ConfigLoad(proxy *Proxy, flags *ConfigFlags) error {
return nil
}
func (config *Config) printRegisteredServers(proxy *Proxy, jsonOutput bool) error {
func (config *Config) printRegisteredServers(proxy *Proxy, jsonOutput bool, includeRelays bool) error {
var summary []ServerSummary
if includeRelays {
for _, registeredRelay := range proxy.registeredRelays {
addrStr, port := registeredRelay.stamp.ServerAddrStr, stamps.DefaultPort
var hostAddr string
hostAddr, port = ExtractHostAndPort(addrStr, port)
addrs := make([]string, 0)
if (registeredRelay.stamp.Proto == stamps.StampProtoTypeDoH || registeredRelay.stamp.Proto == stamps.StampProtoTypeODoHTarget) &&
len(registeredRelay.stamp.ProviderName) > 0 {
providerName := registeredRelay.stamp.ProviderName
var host string
host, port = ExtractHostAndPort(providerName, port)
addrs = append(addrs, host)
}
if len(addrStr) > 0 {
addrs = append(addrs, hostAddr)
}
nolog := true
nofilter := true
if registeredRelay.stamp.Proto == stamps.StampProtoTypeODoHRelay {
nolog = registeredRelay.stamp.Props&stamps.ServerInformalPropertyNoLog != 0
}
serverSummary := ServerSummary{
Name: registeredRelay.name,
Proto: registeredRelay.stamp.Proto.String(),
IPv6: strings.HasPrefix(addrStr, "["),
Ports: []int{port},
Addrs: addrs,
NoLog: nolog,
NoFilter: nofilter,
Description: registeredRelay.description,
Stamp: registeredRelay.stamp.String(),
}
if jsonOutput {
summary = append(summary, serverSummary)
} else {
fmt.Println(serverSummary.Name)
}
}
}
for _, registeredServer := range proxy.registeredServers {
addrStr, port := registeredServer.stamp.ServerAddrStr, stamps.DefaultPort
var hostAddr string
hostAddr, port = ExtractHostAndPort(addrStr, port)
addrs := make([]string, 0)
if registeredServer.stamp.Proto == stamps.StampProtoTypeDoH && len(registeredServer.stamp.ProviderName) > 0 {
if (registeredServer.stamp.Proto == stamps.StampProtoTypeDoH || registeredServer.stamp.Proto == stamps.StampProtoTypeODoHTarget) &&
len(registeredServer.stamp.ProviderName) > 0 {
providerName := registeredServer.stamp.ProviderName
var host string
host, port = ExtractHostAndPort(providerName, port)
@ -763,13 +839,14 @@ func (config *Config) printRegisteredServers(proxy *Proxy, jsonOutput bool) erro
if len(addrStr) > 0 {
addrs = append(addrs, hostAddr)
}
dnssec := registeredServer.stamp.Props&stamps.ServerInformalPropertyDNSSEC != 0
serverSummary := ServerSummary{
Name: registeredServer.name,
Proto: registeredServer.stamp.Proto.String(),
IPv6: strings.HasPrefix(addrStr, "["),
Ports: []int{port},
Addrs: addrs,
DNSSEC: registeredServer.stamp.Props&stamps.ServerInformalPropertyDNSSEC != 0,
DNSSEC: &dnssec,
NoLog: registeredServer.stamp.Props&stamps.ServerInformalPropertyNoLog != 0,
NoFilter: registeredServer.stamp.Props&stamps.ServerInformalPropertyNoFilter != 0,
Description: registeredServer.description,
@ -829,7 +906,9 @@ func (config *Config) loadSources(proxy *Proxy) error {
}
proxy.registeredServers = append(proxy.registeredServers, RegisteredServer{name: serverName, stamp: stamp})
}
proxy.updateRegisteredServers()
if err := proxy.updateRegisteredServers(); err != nil {
return err
}
rs1 := proxy.registeredServers
rs2 := proxy.serversInfo.registeredServers
rand.Shuffle(len(rs1), func(i, j int) {
@ -860,12 +939,20 @@ func (config *Config) loadSource(proxy *Proxy, cfgSourceName string, cfgSource *
}
if cfgSource.RefreshDelay <= 0 {
cfgSource.RefreshDelay = 72
} else if cfgSource.RefreshDelay > 168 {
cfgSource.RefreshDelay = 168
}
source, err := NewSource(cfgSourceName, proxy.xTransport, cfgSource.URLs, cfgSource.MinisignKeyStr, cfgSource.CacheFile, cfgSource.FormatStr, time.Duration(cfgSource.RefreshDelay)*time.Hour, cfgSource.Prefix)
cfgSource.RefreshDelay = Min(168, Max(24, cfgSource.RefreshDelay))
source, err := NewSource(
cfgSourceName,
proxy.xTransport,
cfgSource.URLs,
cfgSource.MinisignKeyStr,
cfgSource.CacheFile,
cfgSource.FormatStr,
time.Duration(cfgSource.RefreshDelay)*time.Hour,
cfgSource.Prefix,
)
if err != nil {
if len(source.in) <= 0 {
if len(source.bin) <= 0 {
dlog.Criticalf("Unable to retrieve source [%s]: [%s]", cfgSourceName, err)
return err
}
@ -891,7 +978,10 @@ func cdFileDir(fileName string) error {
func cdLocal() {
exeFileName, err := os.Executable()
if err != nil {
dlog.Warnf("Unable to determine the executable directory: [%s] -- You will need to specify absolute paths in the configuration file", err)
dlog.Warnf(
"Unable to determine the executable directory: [%s] -- You will need to specify absolute paths in the configuration file",
err,
)
} else if err := os.Chdir(filepath.Dir(exeFileName)); err != nil {
dlog.Warnf("Unable to change working directory to [%s]: %s", exeFileName, err)
}
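
The new `tls_key_log_file` handling earlier in this file ultimately hands an open file to Go's TLS stack via `tls.Config.KeyLogWriter`. A minimal sketch of that mechanism using only the standard library; the endpoint is a placeholder and, as the config warns, this should only ever be enabled for debugging:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Append NSS-format key-log lines; tools such as Wireshark can use this file
	// to decrypt captured TLS sessions. Debugging only — never enable in production.
	f, err := os.OpenFile("/tmp/keylog.txt", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{KeyLogWriter: f}, // session secrets are written here
	}}

	resp, err := client.Get("https://doh.example.net/dns-query") // placeholder endpoint
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
}
```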

View File

@ -5,7 +5,6 @@ import (
crypto_rand "crypto/rand"
"crypto/sha512"
"errors"
"math/rand"
"github.com/jedisct1/dlog"
"github.com/jedisct1/xsecretbox"
@ -45,7 +44,12 @@ func unpad(packet []byte) ([]byte, error) {
}
}
func ComputeSharedKey(cryptoConstruction CryptoConstruction, secretKey *[32]byte, serverPk *[32]byte, providerName *string) (sharedKey [32]byte) {
func ComputeSharedKey(
cryptoConstruction CryptoConstruction,
secretKey *[32]byte,
serverPk *[32]byte,
providerName *string,
) (sharedKey [32]byte) {
if cryptoConstruction == XChacha20Poly1305 {
var err error
sharedKey, err = xsecretbox.SharedKey(*secretKey, *serverPk)
@ -68,9 +72,15 @@ func ComputeSharedKey(cryptoConstruction CryptoConstruction, secretKey *[32]byte
return
}
func (proxy *Proxy) Encrypt(serverInfo *ServerInfo, packet []byte, proto string) (sharedKey *[32]byte, encrypted []byte, clientNonce []byte, err error) {
func (proxy *Proxy) Encrypt(
serverInfo *ServerInfo,
packet []byte,
proto string,
) (sharedKey *[32]byte, encrypted []byte, clientNonce []byte, err error) {
nonce, clientNonce := make([]byte, NonceSize), make([]byte, HalfNonceSize)
crypto_rand.Read(clientNonce)
if _, err := crypto_rand.Read(clientNonce); err != nil {
return nil, nil, nil, err
}
copy(nonce, clientNonce)
var publicKey *[PublicKeySize]byte
if proxy.ephemeralKeys {
@ -93,14 +103,15 @@ func (proxy *Proxy) Encrypt(serverInfo *ServerInfo, packet []byte, proto string)
minQuestionSize = Max(proxy.questionSizeEstimator.MinQuestionSize(), minQuestionSize)
} else {
var xpad [1]byte
rand.Read(xpad[:])
if _, err := crypto_rand.Read(xpad[:]); err != nil {
return nil, nil, nil, err
}
minQuestionSize += int(xpad[0])
}
paddedLength := Min(MaxDNSUDPPacketSize, (Max(minQuestionSize, QueryOverhead)+1+63) & ^63)
if proto == "udp" && serverInfo.knownBugs.fragmentsBlocked {
if serverInfo.knownBugs.fragmentsBlocked && proto == "udp" {
paddedLength = MaxDNSUDPSafePacketSize
}
if serverInfo.Relay != nil && proto == "tcp" {
} else if serverInfo.Relay != nil && proto == "tcp" {
paddedLength = MaxDNSPacketSize
}
if QueryOverhead+len(packet)+1 > paddedLength {
@ -120,7 +131,12 @@ func (proxy *Proxy) Encrypt(serverInfo *ServerInfo, packet []byte, proto string)
return
}
func (proxy *Proxy) Decrypt(serverInfo *ServerInfo, sharedKey *[32]byte, encrypted []byte, nonce []byte) ([]byte, error) {
func (proxy *Proxy) Decrypt(
serverInfo *ServerInfo,
sharedKey *[32]byte,
encrypted []byte,
nonce []byte,
) ([]byte, error) {
serverMagicLen := len(ServerMagic)
responseHeaderLen := serverMagicLen + NonceSize
if len(encrypted) < responseHeaderLen+TagSize+int(MinDNSPacketSize) ||

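For reference, the `paddedLength` expression in `Encrypt` above rounds the question size (plus one pad-marker byte) up to the next multiple of 64 and caps it at the maximum packet size. A small worked sketch of that computation; the 64/512 limits are illustrative stand-ins, not the real constants:

```go
package main

import "fmt"

// padLength rounds (n + 1) up to the next multiple of 64 and caps the result,
// mirroring the paddedLength computation above. minLen stands in for the protocol
// overhead floor and maxLen for the maximum UDP packet size; both are illustrative.
func padLength(n, minLen, maxLen int) int {
	if n < minLen {
		n = minLen
	}
	padded := (n + 1 + 63) &^ 63 // next multiple of 64 after the pad-marker byte
	if padded > maxLen {
		padded = maxLen
	}
	return padded
}

func main() {
	for _, q := range []int{40, 63, 64, 200, 511} {
		fmt.Printf("question %3d bytes -> padded to %3d bytes\n", q, padLength(q, 64, 512))
	}
}
```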
View File

@ -20,12 +20,22 @@ type CertInfo struct {
ForwardSecurity bool
}
func FetchCurrentDNSCryptCert(proxy *Proxy, serverName *string, proto string, pk ed25519.PublicKey, serverAddress string, providerName string, isNew bool, relay *DNSCryptRelay, knownBugs ServerBugs) (CertInfo, int, bool, error) {
func FetchCurrentDNSCryptCert(
proxy *Proxy,
serverName *string,
proto string,
pk ed25519.PublicKey,
serverAddress string,
providerName string,
isNew bool,
relay *DNSCryptRelay,
knownBugs ServerBugs,
) (CertInfo, int, bool, error) {
if len(pk) != ed25519.PublicKeySize {
return CertInfo{}, 0, false, errors.New("Invalid public key length")
}
if !strings.HasSuffix(providerName, ".") {
providerName = providerName + "."
providerName += "."
}
if serverName == nil {
serverName = &providerName
@ -34,7 +44,11 @@ func FetchCurrentDNSCryptCert(proxy *Proxy, serverName *string, proto string, pk
query.SetQuestion(providerName, dns.TypeTXT)
if !strings.HasPrefix(providerName, "2.dnscrypt-cert.") {
if relay != nil && !proxy.anonDirectCertFallback {
dlog.Warnf("[%v] uses a non-standard provider name, enable direct cert fallback to use with a relay ('%v' doesn't start with '2.dnscrypt-cert.')", *serverName, providerName)
dlog.Warnf(
"[%v] uses a non-standard provider name, enable direct cert fallback to use with a relay ('%v' doesn't start with '2.dnscrypt-cert.')",
*serverName,
providerName,
)
} else {
dlog.Warnf("[%v] uses a non-standard provider name ('%v' doesn't start with '2.dnscrypt-cert.')", *serverName, providerName)
relay = nil
@ -44,7 +58,15 @@ func FetchCurrentDNSCryptCert(proxy *Proxy, serverName *string, proto string, pk
if knownBugs.fragmentsBlocked {
tryFragmentsSupport = false
}
in, rtt, fragmentsBlocked, err := DNSExchange(proxy, proto, &query, serverAddress, relay, serverName, tryFragmentsSupport)
in, rtt, fragmentsBlocked, err := DNSExchange(
proxy,
proto,
&query,
serverAddress,
relay,
serverName,
tryFragmentsSupport,
)
if err != nil {
dlog.Noticef("[%s] TIMEOUT", *serverName)
return CertInfo{}, 0, fragmentsBlocked, err
@ -95,10 +117,17 @@ func FetchCurrentDNSCryptCert(proxy *Proxy, serverName *string, proto string, pk
}
ttl := tsEnd - tsBegin
if ttl > 86400*7 {
dlog.Infof("[%v] the key validity period for this server is excessively long (%d days), significantly reducing reliability and forward security.", *serverName, ttl/86400)
dlog.Infof(
"[%v] the key validity period for this server is excessively long (%d days), significantly reducing reliability and forward security.",
*serverName,
ttl/86400,
)
daysLeft := (tsEnd - now) / 86400
if daysLeft < 1 {
dlog.Criticalf("[%v] certificate will expire today -- Switch to a different resolver as soon as possible", *serverName)
dlog.Criticalf(
"[%v] certificate will expire today -- Switch to a different resolver as soon as possible",
*serverName,
)
} else if daysLeft <= 7 {
dlog.Warnf("[%v] certificate is about to expire -- if you don't manage this server, tell the server operator about it", *serverName)
} else if daysLeft <= 30 {
@ -112,7 +141,13 @@ func FetchCurrentDNSCryptCert(proxy *Proxy, serverName *string, proto string, pk
}
if !proxy.certIgnoreTimestamp {
if now > tsEnd || now < tsBegin {
dlog.Debugf("[%v] Certificate not valid at the current date (now: %v is not in [%v..%v])", *serverName, now, tsBegin, tsEnd)
dlog.Debugf(
"[%v] Certificate not valid at the current date (now: %v is not in [%v..%v])",
*serverName,
now,
tsBegin,
tsEnd,
)
continue
}
}
@ -148,7 +183,7 @@ func FetchCurrentDNSCryptCert(proxy *Proxy, serverName *string, proto string, pk
certCountStr = " - additional certificate"
}
if certInfo.CryptoConstruction == UndefinedConstruction {
return certInfo, 0, fragmentsBlocked, errors.New("No useable certificate found")
return certInfo, 0, fragmentsBlocked, errors.New("No usable certificate found")
}
return certInfo, int(rtt.Nanoseconds() / 1000000), fragmentsBlocked, nil
}
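
The certificate freshness warnings above come down to epoch arithmetic on the validity window. A compact sketch of the same classification; the thresholds are taken from the code above, while the wording of the 30-day message is illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// classifyCert reproduces the expiry checks above: reject certificates outside
// [tsBegin, tsEnd], then escalate as the remaining lifetime shrinks.
func classifyCert(tsBegin, tsEnd, now uint32) string {
	if now > tsEnd || now < tsBegin {
		return "not valid at the current date"
	}
	daysLeft := (tsEnd - now) / 86400
	switch {
	case daysLeft < 1:
		return "certificate will expire today"
	case daysLeft <= 7:
		return "certificate is about to expire"
	case daysLeft <= 30:
		return fmt.Sprintf("certificate expires in %d days", daysLeft)
	default:
		return fmt.Sprintf("certificate valid for %d more days", daysLeft)
	}
}

func main() {
	now := uint32(time.Now().Unix())
	fmt.Println(classifyCert(now-86400, now+5*86400, now))  // about to expire
	fmt.Println(classifyCert(now-86400, now+90*86400, now)) // plenty of margin
}
```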

View File

@ -40,6 +40,11 @@ func TruncatedResponse(packet []byte) ([]byte, error) {
func RefusedResponseFromMessage(srcMsg *dns.Msg, refusedCode bool, ipv4 net.IP, ipv6 net.IP, ttl uint32) *dns.Msg {
dstMsg := EmptyResponseFromMessage(srcMsg)
ede := new(dns.EDNS0_EDE)
if edns0 := dstMsg.IsEdns0(); edns0 != nil {
edns0.Option = append(edns0.Option, ede)
}
ede.InfoCode = dns.ExtendedErrorCodeFiltered
if refusedCode {
dstMsg.Rcode = dns.RcodeRefused
} else {
@ -58,6 +63,7 @@ func RefusedResponseFromMessage(srcMsg *dns.Msg, refusedCode bool, ipv4 net.IP,
if rr.A != nil {
dstMsg.Answer = []dns.RR{rr}
sendHInfoResponse = false
ede.InfoCode = dns.ExtendedErrorCodeForgedAnswer
}
} else if ipv6 != nil && question.Qtype == dns.TypeAAAA {
rr := new(dns.AAAA)
@ -66,18 +72,24 @@ func RefusedResponseFromMessage(srcMsg *dns.Msg, refusedCode bool, ipv4 net.IP,
if rr.AAAA != nil {
dstMsg.Answer = []dns.RR{rr}
sendHInfoResponse = false
ede.InfoCode = dns.ExtendedErrorCodeForgedAnswer
}
}
if sendHInfoResponse {
hinfo := new(dns.HINFO)
hinfo.Hdr = dns.RR_Header{Name: question.Name, Rrtype: dns.TypeHINFO,
Class: dns.ClassINET, Ttl: ttl}
hinfo.Hdr = dns.RR_Header{
Name: question.Name, Rrtype: dns.TypeHINFO,
Class: dns.ClassINET, Ttl: ttl,
}
hinfo.Cpu = "This query has been locally blocked"
hinfo.Os = "by dnscrypt-proxy"
dstMsg.Answer = []dns.RR{hinfo}
} else {
ede.ExtraText = "This query has been locally blocked by dnscrypt-proxy"
}
}
return dstMsg
}
@ -135,7 +147,8 @@ func NormalizeQName(str string) (string, error) {
}
func getMinTTL(msg *dns.Msg, minTTL uint32, maxTTL uint32, cacheNegMinTTL uint32, cacheNegMaxTTL uint32) time.Duration {
if (msg.Rcode != dns.RcodeSuccess && msg.Rcode != dns.RcodeNameError) || (len(msg.Answer) <= 0 && len(msg.Ns) <= 0) {
if (msg.Rcode != dns.RcodeSuccess && msg.Rcode != dns.RcodeNameError) ||
(len(msg.Answer) <= 0 && len(msg.Ns) <= 0) {
return time.Duration(cacheNegMinTTL) * time.Second
}
var ttl uint32
@ -261,8 +274,6 @@ func removeEDNS0Options(msg *dns.Msg) bool {
return true
}
func isDigit(b byte) bool { return b >= '0' && b <= '9' }
func dddToByte(s []byte) byte {
return byte((s[0]-'0')*100 + (s[1]-'0')*10 + (s[2] - '0'))
}
@ -304,44 +315,53 @@ type DNSExchangeResponse struct {
err error
}
func DNSExchange(proxy *Proxy, proto string, query *dns.Msg, serverAddress string, relay *DNSCryptRelay, serverName *string, tryFragmentsSupport bool) (*dns.Msg, time.Duration, bool, error) {
func DNSExchange(
proxy *Proxy,
proto string,
query *dns.Msg,
serverAddress string,
relay *DNSCryptRelay,
serverName *string,
tryFragmentsSupport bool,
) (*dns.Msg, time.Duration, bool, error) {
for {
cancelChannel := make(chan struct{})
channel := make(chan DNSExchangeResponse)
maxTries := 3
channel := make(chan DNSExchangeResponse, 2*maxTries)
var err error
options := 0
for tries := 0; tries < 3; tries++ {
for tries := 0; tries < maxTries; tries++ {
if tryFragmentsSupport {
queryCopy := query.Copy()
queryCopy.Id += uint16(options)
go func(query *dns.Msg, delay time.Duration) {
option := _dnsExchange(proxy, proto, query, serverAddress, relay, 1500)
time.Sleep(delay)
option := DNSExchangeResponse{err: errors.New("Canceled")}
select {
case <-cancelChannel:
default:
option = _dnsExchange(proxy, proto, query, serverAddress, relay, 1500)
}
option.fragmentsBlocked = false
option.priority = 0
channel <- option
time.Sleep(delay)
select {
case <-cancelChannel:
return
default:
}
}(queryCopy, time.Duration(200*tries)*time.Millisecond)
options++
}
queryCopy := query.Copy()
queryCopy.Id += uint16(options)
go func(query *dns.Msg, delay time.Duration) {
option := _dnsExchange(proxy, proto, query, serverAddress, relay, 480)
time.Sleep(delay)
option := DNSExchangeResponse{err: errors.New("Canceled")}
select {
case <-cancelChannel:
default:
option = _dnsExchange(proxy, proto, query, serverAddress, relay, 480)
}
option.fragmentsBlocked = true
option.priority = 1
channel <- option
time.Sleep(delay)
select {
case <-cancelChannel:
return
default:
}
}(queryCopy, time.Duration(250*tries)*time.Millisecond)
options++
}
@ -375,12 +395,23 @@ func DNSExchange(proxy *Proxy, proto string, query *dns.Msg, serverAddress strin
}
return nil, 0, false, err
}
dlog.Infof("Unable to get the public key for [%v] via relay [%v], retrying over a direct connection", *serverName, relay.RelayUDPAddr.IP)
dlog.Infof(
"Unable to get the public key for [%v] via relay [%v], retrying over a direct connection",
*serverName,
relay.RelayUDPAddr.IP,
)
relay = nil
}
}
func _dnsExchange(proxy *Proxy, proto string, query *dns.Msg, serverAddress string, relay *DNSCryptRelay, paddedLen int) DNSExchangeResponse {
func _dnsExchange(
proxy *Proxy,
proto string,
query *dns.Msg,
serverAddress string,
relay *DNSCryptRelay,
paddedLen int,
) DNSExchangeResponse {
var packet []byte
var rtt time.Duration

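The blocked-query responses earlier in this file now carry an Extended DNS Error (RFC 8914) option. A hedged sketch of building the same kind of REFUSED answer with the miekg/dns API already used throughout this codebase:

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Incoming query (normally parsed from the wire).
	query := new(dns.Msg)
	query.SetQuestion("ads.example.com.", dns.TypeA)

	// Synthesize a REFUSED response and attach an Extended DNS Error (RFC 8914)
	// explaining that the answer was filtered locally.
	reply := new(dns.Msg)
	reply.SetRcode(query, dns.RcodeRefused)
	opt := reply.SetEdns0(4096, false).IsEdns0()
	ede := &dns.EDNS0_EDE{
		InfoCode:  dns.ExtendedErrorCodeFiltered,
		ExtraText: "This query has been locally blocked by dnscrypt-proxy",
	}
	opt.Option = append(opt.Option, ede)

	fmt.Println(reply)
}
```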
View File

@ -6,10 +6,6 @@ import (
"github.com/VividCortex/ewma"
)
const (
SizeEstimatorEwmaDecay = 100.0
)
type QuestionSizeEstimator struct {
sync.RWMutex
minQuestionSize int
@ -17,7 +13,10 @@ type QuestionSizeEstimator struct {
}
func NewQuestionSizeEstimator() QuestionSizeEstimator {
return QuestionSizeEstimator{minQuestionSize: InitialMinQuestionSize, ewma: ewma.NewMovingAverage(SizeEstimatorEwmaDecay)}
return QuestionSizeEstimator{
minQuestionSize: InitialMinQuestionSize,
ewma: &ewma.SimpleEWMA{},
}
}
func (questionSizeEstimator *QuestionSizeEstimator) MinQuestionSize() int {

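The estimator above now uses the zero value of `ewma.SimpleEWMA`, which, unlike `NewMovingAverage`, has no warm-up period and uses a constant decay. A minimal usage sketch of the VividCortex/ewma API as assumed here:

```go
package main

import (
	"fmt"

	"github.com/VividCortex/ewma"
)

func main() {
	// The zero value is ready to use: no constructor, no warm-up period,
	// and the package's default constant decay.
	var avg ewma.SimpleEWMA
	for _, size := range []float64{64, 80, 512, 96, 72} {
		avg.Add(size)
	}
	fmt.Printf("smoothed question size estimate: %.1f\n", avg.Value())
}
```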
View File

@ -7,7 +7,7 @@
## going through a captive portal.
##
## This is a list of hard-coded IP addresses that will be returned when queries
## for these names are received, even before the operating system an interface
## for these names are received, even before the operating system reports an interface
## as usable for reaching the Internet.
##
## Note that IPv6 addresses don't need to be specified within brackets,
@ -21,3 +21,7 @@ dns.msftncsi.com 131.107.255.255, fd3e:4f5a:5b81::1
www.msftconnecttest.com 13.107.4.52
ipv6.msftconnecttest.com 2a01:111:2003::52
ipv4only.arpa 192.0.0.170, 192.0.0.171
## Adding IP addresses of NTP servers is also a good idea
time.google.com 216.239.35.0, 2001:4860:4806::
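
To make the effect of these entries concrete, a hedged sketch of answering an A query from a hard-coded name-to-IP map with miekg/dns; the addresses mirror the entries above, but this is not the plugin's actual code:

```go
package main

import (
	"fmt"
	"net"

	"github.com/miekg/dns"
)

// Hard-coded entries, mirroring the file above (FQDNs, hence the trailing dots).
var captivePortalIPs = map[string][]net.IP{
	"dns.msftncsi.com.": {net.ParseIP("131.107.255.255")},
	"ipv4only.arpa.":    {net.ParseIP("192.0.0.170"), net.ParseIP("192.0.0.171")},
}

// answerFromMap builds a reply from the static map, the way captive-portal names
// are answered before the network is considered usable.
func answerFromMap(query *dns.Msg) *dns.Msg {
	reply := new(dns.Msg)
	reply.SetReply(query)
	question := query.Question[0]
	for _, ip := range captivePortalIPs[question.Name] {
		if ip4 := ip.To4(); ip4 != nil && question.Qtype == dns.TypeA {
			reply.Answer = append(reply.Answer, &dns.A{
				Hdr: dns.RR_Header{Name: question.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 600},
				A:   ip4,
			})
		}
	}
	return reply
}

func main() {
	query := new(dns.Msg)
	query.SetQuestion("ipv4only.arpa.", dns.TypeA)
	fmt.Println(answerFromMap(query))
}
```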

View File

@ -35,3 +35,10 @@ localhost ::1
# ads.* 192.168.100.1
# ads.* 192.168.100.2
# ads.* ::1
# PTR records can be created by setting cloak_ptr in the main configuration file
# Entries with wild cards will not have PTR records created, but multiple
# names for the same IP are supported
# example.com 192.168.100.1
# my.example.com 192.168.100.1

View File

@ -97,6 +97,13 @@ disabled_server_names = []
force_tcp = false
## Enable *experimental* support for HTTP/3 (DoH3, HTTP over QUIC)
## Note that, like DNSCrypt but unlike other HTTP versions, this uses
## UDP and (usually) port 443 instead of TCP.
http3 = false
## SOCKS proxy
## Uncomment the following line to route all TCP connections to a local Tor node
## Tor doesn't support UDP, so set `force_tcp` to `true` as well.
@ -118,7 +125,7 @@ force_tcp = false
timeout = 5000
## Keepalive for HTTP (HTTPS, HTTP/2) queries, in seconds
## Keepalive for HTTP (HTTPS, HTTP/2, HTTP/3) queries, in seconds
keepalive = 30
@ -128,7 +135,7 @@ keepalive = 30
## Multiple networks can be listed; they will be randomly chosen.
## These networks don't have to match your actual networks.
# edns_client_subnet = ["0.0.0.0/0", "2001:db8::/32"]
# edns_client_subnet = ['0.0.0.0/0', '2001:db8::/32']
## Response for blocked queries. Options are `refused`, `hinfo` (default) or
@ -176,11 +183,24 @@ keepalive = 30
# use_syslog = true
## The maximum concurrency to reload certificates from the resolvers.
## Default is 10.
# cert_refresh_concurrency = 10
## Delay, in minutes, after which certificates are reloaded
cert_refresh_delay = 240
## Initially don't check DNSCrypt server certificates for expiration, and
## only start checking them after a first successful connection to a resolver.
## This can be useful on routers with no battery-backed clock.
# cert_ignore_timestamp = false
## DNSCrypt: Create a new, unique key for every single DNS query
## This may improve privacy but can also have a significant impact on CPU usage
## Only enable if you don't have a lot of network load
@ -193,24 +213,30 @@ cert_refresh_delay = 240
# tls_disable_session_tickets = false
## DoH: Use a specific cipher suite instead of the server preference
## DoH: Use TLS 1.2 and specific cipher suite instead of the server preference
## 49199 = TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
## 49195 = TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
## 52392 = TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
## 52393 = TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
## 4865 = TLS_AES_128_GCM_SHA256
## 4867 = TLS_CHACHA20_POLY1305_SHA256
##
## On non-Intel CPUs such as MIPS routers and ARM systems (Android, Raspberry Pi...),
## the following suite improves performance.
## This may also help on Intel CPUs running 32-bit operating systems.
##
## Keep tls_cipher_suite empty if you have issues fetching sources or
## connecting to some DoH servers. Google and Cloudflare are fine with it.
## connecting to some DoH servers.
# tls_cipher_suite = [52392, 49199]
## Log TLS key material to a file, for debugging purposes only.
## This file will contain the TLS master key, which can be used to decrypt
## all TLS traffic to/from DoH servers.
## Never ever enable except for debugging purposes with a tool such as mitmproxy.
# tls_key_log_file = '/tmp/keylog.txt'
## Bootstrap resolvers
##
## These are normal, non-encrypted DNS resolvers, that will be only used
@ -241,10 +267,20 @@ cert_refresh_delay = 240
## not be sent there. If you're using DNSCrypt or Anonymized DNS and your
## lists are up to date, these resolvers will not even be used.
bootstrap_resolvers = ['9.9.9.9:53', '8.8.8.8:53']
bootstrap_resolvers = ['9.9.9.11:53', '8.8.8.8:53']
## Always use the bootstrap resolver before the system DNS settings.
## When internal DNS resolution is required, for example to retrieve
## the resolvers list:
##
## - queries will be sent to dnscrypt-proxy itself, if it is already
## running with active servers (*)
## - or else, queries will be sent to fallback servers
## - finally, if `ignore_system_dns` is `false`, queries will be sent
## to the system DNS
##
## (*) this is incompatible with systemd sockets.
## `listen_addrs` must not be empty.
ignore_system_dns = true
@ -318,6 +354,7 @@ block_ipv6 = false
## Immediately respond to A and AAAA queries for host names without a domain name
## This also prevents "dotless domain names" from being resolved upstream.
block_unqualified = true
@ -352,6 +389,8 @@ reject_ttl = 10
## Cloaking returns a predefined address for a specific name.
## In addition to acting as a HOSTS file, it can also return the IP address
## of a different name. It will also do CNAME flattening.
## If 'cloak_ptr' is set, then PTR (reverse lookups) are enabled
## for cloaking rules that do not contain wild cards.
##
## See the `example-cloaking-rules.txt` file for an example
@ -360,6 +399,7 @@ reject_ttl = 10
## TTL used when serving entries in cloaking-rules.txt
# cloak_ttl = 600
# cloak_ptr = false
@ -436,6 +476,8 @@ cache_neg_max_ttl = 600
## Certificate file and key - Note that the certificate has to be trusted.
## Can be generated using the following command:
## openssl req -x509 -nodes -newkey rsa:2048 -days 5000 -sha256 -keyout localhost.pem -out localhost.pem
## See the documentation (wiki) for more information.
# cert_file = 'localhost.pem'
@ -451,20 +493,20 @@ cache_neg_max_ttl = 600
[query_log]
## Path to the query log file (absolute, or relative to the same directory as the config file)
## Can be set to /dev/stdout in order to log to the standard output.
## Path to the query log file (absolute, or relative to the same directory as the config file)
## Can be set to /dev/stdout in order to log to the standard output.
# file = 'query.log'
# file = 'query.log'
## Query log format (currently supported: tsv and ltsv)
## Query log format (currently supported: tsv and ltsv)
format = 'tsv'
format = 'tsv'
## Do not log these query types, to reduce verbosity. Keep empty to log everything.
## Do not log these query types, to reduce verbosity. Keep empty to log everything.
# ignored_qtypes = ['DNSKEY', 'NS']
# ignored_qtypes = ['DNSKEY', 'NS']
@ -478,19 +520,19 @@ cache_neg_max_ttl = 600
[nx_log]
## Path to the query log file (absolute, or relative to the same directory as the config file)
## Path to the query log file (absolute, or relative to the same directory as the config file)
# file = 'nx.log'
# file = 'nx.log'
## Query log format (currently supported: tsv and ltsv)
## Query log format (currently supported: tsv and ltsv)
format = 'tsv'
format = 'tsv'
######################################################
# Pattern-based blocking (blocklists) #
# Pattern-based blocking (blocklists) #
######################################################
## Blocklists are made of one pattern per line. Example of valid patterns:
@ -508,19 +550,19 @@ cache_neg_max_ttl = 600
[blocked_names]
## Path to the file of blocking rules (absolute, or relative to the same directory as the config file)
## Path to the file of blocking rules (absolute, or relative to the same directory as the config file)
# blocked_names_file = 'blocked-names.txt'
# blocked_names_file = 'blocked-names.txt'
## Optional path to a file logging blocked queries
## Optional path to a file logging blocked queries
# log_file = 'blocked-names.log'
# log_file = 'blocked-names.log'
## Optional log format: tsv or ltsv (default: tsv)
## Optional log format: tsv or ltsv (default: tsv)
# log_format = 'tsv'
# log_format = 'tsv'
@ -536,24 +578,24 @@ cache_neg_max_ttl = 600
[blocked_ips]
## Path to the file of blocking rules (absolute, or relative to the same directory as the config file)
## Path to the file of blocking rules (absolute, or relative to the same directory as the config file)
# blocked_ips_file = 'blocked-ips.txt'
# blocked_ips_file = 'blocked-ips.txt'
## Optional path to a file logging blocked queries
## Optional path to a file logging blocked queries
# log_file = 'blocked-ips.log'
# log_file = 'blocked-ips.log'
## Optional log format: tsv or ltsv (default: tsv)
## Optional log format: tsv or ltsv (default: tsv)
# log_format = 'tsv'
# log_format = 'tsv'
######################################################
# Pattern-based allow lists (blocklists bypass) #
# Pattern-based allow lists (blocklists bypass) #
######################################################
## Allowlists support the same patterns as blocklists
@ -564,19 +606,19 @@ cache_neg_max_ttl = 600
[allowed_names]
## Path to the file of allow list rules (absolute, or relative to the same directory as the config file)
## Path to the file of allow list rules (absolute, or relative to the same directory as the config file)
# allowed_names_file = 'allowed-names.txt'
# allowed_names_file = 'allowed-names.txt'
## Optional path to a file logging allowed queries
## Optional path to a file logging allowed queries
# log_file = 'allowed-names.log'
# log_file = 'allowed-names.log'
## Optional log format: tsv or ltsv (default: tsv)
## Optional log format: tsv or ltsv (default: tsv)
# log_format = 'tsv'
# log_format = 'tsv'
@ -585,25 +627,25 @@ cache_neg_max_ttl = 600
#########################################################
## Allowed IP lists support the same patterns as IP blocklists
## If an IP response matches an allow ip entry, the corresponding session
## If an IP response matches an allowed entry, the corresponding session
## will bypass IP filters.
##
## Time-based rules are also supported to make some websites only accessible at specific times of the day.
[allowed_ips]
## Path to the file of allowed ip rules (absolute, or relative to the same directory as the config file)
## Path to the file of allowed ip rules (absolute, or relative to the same directory as the config file)
# allowed_ips_file = 'allowed-ips.txt'
# allowed_ips_file = 'allowed-ips.txt'
## Optional path to a file logging allowed queries
## Optional path to a file logging allowed queries
# log_file = 'allowed-ips.log'
# log_file = 'allowed-ips.log'
## Optional log format: tsv or ltsv (default: tsv)
## Optional log format: tsv or ltsv (default: tsv)
# log_format = 'tsv'
# log_format = 'tsv'
@ -624,21 +666,21 @@ cache_neg_max_ttl = 600
[schedules]
# [schedules.'time-to-sleep']
# mon = [{after='21:00', before='7:00'}]
# tue = [{after='21:00', before='7:00'}]
# wed = [{after='21:00', before='7:00'}]
# thu = [{after='21:00', before='7:00'}]
# fri = [{after='23:00', before='7:00'}]
# sat = [{after='23:00', before='7:00'}]
# sun = [{after='21:00', before='7:00'}]
# [schedules.time-to-sleep]
# mon = [{after='21:00', before='7:00'}]
# tue = [{after='21:00', before='7:00'}]
# wed = [{after='21:00', before='7:00'}]
# thu = [{after='21:00', before='7:00'}]
# fri = [{after='23:00', before='7:00'}]
# sat = [{after='23:00', before='7:00'}]
# sun = [{after='21:00', before='7:00'}]
# [schedules.'work']
# mon = [{after='9:00', before='18:00'}]
# tue = [{after='9:00', before='18:00'}]
# wed = [{after='9:00', before='18:00'}]
# thu = [{after='9:00', before='18:00'}]
# fri = [{after='9:00', before='17:00'}]
# [schedules.work]
# mon = [{after='9:00', before='18:00'}]
# tue = [{after='9:00', before='18:00'}]
# wed = [{after='9:00', before='18:00'}]
# thu = [{after='9:00', before='18:00'}]
# fri = [{after='9:00', before='17:00'}]
@ -660,46 +702,46 @@ cache_neg_max_ttl = 600
## If the `urls` property is missing, cache files and valid signatures
## must already be present. This doesn't prevent these cache files from
## expiring after `refresh_delay` hours.
## Cache freshness is checked every 24 hours, so values for 'refresh_delay'
## of less than 24 hours will have no effect.
## A maximum delay of 168 hours (1 week) is imposed to ensure cache freshness.
## `refresh_delay` must be in the [24..168] interval.
## The minimum delay of 24 hours (1 day) avoids unnecessary requests to servers.
## The maximum delay of 168 hours (1 week) ensures cache freshness.
[sources]
## An example of a remote source from https://github.com/DNSCrypt/dnscrypt-resolvers
### An example of a remote source from https://github.com/DNSCrypt/dnscrypt-resolvers
[sources.'public-resolvers']
urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/public-resolvers.md', 'https://download.dnscrypt.info/resolvers-list/v3/public-resolvers.md', 'https://ipv6.download.dnscrypt.info/resolvers-list/v3/public-resolvers.md', 'https://download.dnscrypt.net/resolvers-list/v3/public-resolvers.md']
[sources.public-resolvers]
urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/public-resolvers.md', 'https://download.dnscrypt.info/resolvers-list/v3/public-resolvers.md']
cache_file = 'public-resolvers.md'
minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
refresh_delay = 72
prefix = ''
## Anonymized DNS relays
### Anonymized DNS relays
[sources.'relays']
urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/relays.md', 'https://download.dnscrypt.info/resolvers-list/v3/relays.md', 'https://ipv6.download.dnscrypt.info/resolvers-list/v3/relays.md', 'https://download.dnscrypt.net/resolvers-list/v3/relays.md']
[sources.relays]
urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/relays.md', 'https://download.dnscrypt.info/resolvers-list/v3/relays.md']
cache_file = 'relays.md'
minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
refresh_delay = 72
prefix = ''
## ODoH (Oblivious DoH) servers and relays
### ODoH (Oblivious DoH) servers and relays
# [sources.'odoh-servers']
# urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/odoh-servers.md', 'https://download.dnscrypt.info/resolvers-list/v3/odoh-servers.md', 'https://ipv6.download.dnscrypt.info/resolvers-list/v3/odoh-servers.md', 'https://download.dnscrypt.net/resolvers-list/v3/odoh-servers.md']
# [sources.odoh-servers]
# urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/odoh-servers.md', 'https://download.dnscrypt.info/resolvers-list/v3/odoh-servers.md']
# cache_file = 'odoh-servers.md'
# minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
# refresh_delay = 24
# prefix = ''
# [sources.'odoh-relays']
# urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/odoh-relays.md', 'https://download.dnscrypt.info/resolvers-list/v3/odoh-relays.md', 'https://ipv6.download.dnscrypt.info/resolvers-list/v3/odoh-relays.md', 'https://download.dnscrypt.net/resolvers-list/v3/odoh-relays.md']
# [sources.odoh-relays]
# urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/odoh-relays.md', 'https://download.dnscrypt.info/resolvers-list/v3/odoh-relays.md']
# cache_file = 'odoh-relays.md'
# minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
# refresh_delay = 24
# prefix = ''
## Quad9
### Quad9
# [sources.quad9-resolvers]
# urls = ['https://www.quad9.net/quad9-resolvers.md']
@ -707,14 +749,22 @@ cache_neg_max_ttl = 600
# cache_file = 'quad9-resolvers.md'
# prefix = 'quad9-'
## Another example source, with resolvers censoring some websites not appropriate for children
## This is a subset of the `public-resolvers` list, so enabling both is useless
### Another example source, with resolvers censoring some websites not appropriate for children
### This is a subset of the `public-resolvers` list, so enabling both is useless.
# [sources.'parental-control']
# urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/parental-control.md', 'https://download.dnscrypt.info/resolvers-list/v3/parental-control.md', 'https://ipv6.download.dnscrypt.info/resolvers-list/v3/parental-control.md', 'https://download.dnscrypt.net/resolvers-list/v3/parental-control.md']
# cache_file = 'parental-control.md'
# minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
# [sources.parental-control]
# urls = ['https://raw.githubusercontent.com/DNSCrypt/dnscrypt-resolvers/master/v3/parental-control.md', 'https://download.dnscrypt.info/resolvers-list/v3/parental-control.md']
# cache_file = 'parental-control.md'
# minisign_key = 'RWQf6LRCGA9i53mlYecO4IzT51TGPpvWucNSCh1CBM0QTaLn73Y7GFO3'
### dnscry.pt servers - See https://www.dnscry.pt
# [sources.dnscry-pt-resolvers]
# urls = ["https://www.dnscry.pt/resolvers.md"]
# minisign_key = "RWQM31Nwkqh01x88SvrBL8djp1NH56Rb4mKLHz16K7qsXgEomnDv6ziQ"
# cache_file = "dnscry.pt-resolvers.md"
# refresh_delay = 72
# prefix = "dnscry.pt-"
#########################################
@ -723,16 +773,16 @@ cache_neg_max_ttl = 600
[broken_implementations]
# Cisco servers currently cannot handle queries larger than 1472 bytes, and don't
# truncate reponses larger than questions as expected by the DNSCrypt protocol.
# This prevents large responses from being received over UDP and over relays.
#
# Older versions of the `dnsdist` server software had a bug with queries larger
# than 1500 bytes. This is fixed since `dnsdist` version 1.5.0, but
# some servers may still run an outdated version.
#
# The list below enables workarounds to make non-relayed usage more reliable
# until the servers are fixed.
## Cisco servers currently cannot handle queries larger than 1472 bytes, and don't
## truncate responses larger than questions as expected by the DNSCrypt protocol.
## This prevents large responses from being received over UDP and over relays.
##
## Older versions of the `dnsdist` server software had a bug with queries larger
## than 1500 bytes. This is fixed since `dnsdist` version 1.5.0, but
## some servers may still run an outdated version.
##
## The list below enables workarounds to make non-relayed usage more reliable
## until the servers are fixed.
fragments_blocked = ['cisco', 'cisco-ipv6', 'cisco-familyshield', 'cisco-familyshield-ipv6', 'cleanbrowsing-adult', 'cleanbrowsing-adult-ipv6', 'cleanbrowsing-family', 'cleanbrowsing-family-ipv6', 'cleanbrowsing-security', 'cleanbrowsing-security-ipv6']
@ -742,15 +792,14 @@ fragments_blocked = ['cisco', 'cisco-ipv6', 'cisco-familyshield', 'cisco-familys
# Certificate-based client authentication for DoH #
#################################################################
# Use a X509 certificate to authenticate yourself when connecting to DoH servers.
# This is only useful if you are operating your own, private DoH server(s).
# 'creds' maps servers to certificates, and supports multiple entries.
# If you are not using the standard root CA, an optional "root_ca"
# property set to the path to a root CRT file can be added to a server entry.
## Use a X509 certificate to authenticate yourself when connecting to DoH servers.
## This is only useful if you are operating your own, private DoH server(s).
## 'creds' maps servers to certificates, and supports multiple entries.
## If you are not using the standard root CA, an optional "root_ca"
## property set to the path to a root CRT file can be added to a server entry.
[doh_client_x509_auth]
#
# creds = [
# { server_name='*', client_cert='client.crt', client_key='client.key' }
# ]
@ -798,14 +847,14 @@ fragments_blocked = ['cisco', 'cisco-ipv6', 'cisco-familyshield', 'cisco-familys
# ]
# Skip resolvers incompatible with anonymization instead of using them directly
## Skip resolvers incompatible with anonymization instead of using them directly
skip_incompatible = false
# If public server certificates for a non-conformant server cannot be
# retrieved via a relay, try getting them directly. Actual queries
# will then always go through relays.
## If public server certificates for a non-conformant server cannot be
## retrieved via a relay, try getting them directly. Actual queries
## will then always go through relays.
# direct_cert_fallback = false
@ -833,13 +882,15 @@ skip_incompatible = false
[dns64]
## (Option 1) Static prefix(es) as Pref64::/n CIDRs.
## Static prefix(es) as Pref64::/n CIDRs
# prefix = ['64:ff9b::/96']
## (Option 2) DNS64-enabled resolver(s) to discover Pref64::/n CIDRs.
## DNS64-enabled resolver(s) to discover Pref64::/n CIDRs
## These resolvers are only used to query the Well-Known IPv4-only Name (WKN) "ipv4only.arpa." for prefix discovery.
## Set this to your ISP's resolvers if custom prefixes (other than the Well-Known Prefix 64:ff9b::/96) are in use.
## IMPORTANT: Default resolvers listed below support Well-Known Prefix 64:ff9b::/96 only.
# resolver = ['[2606:4700:4700::64]:53', '[2001:4860:4860::64]:53']
@ -853,5 +904,5 @@ skip_incompatible = false
[static]
# [static.'myserver']
# stamp = 'sdns://AQcAAAAAAAAAAAAQMi5kbnNjcnlwdC1jZXJ0Lg'
# [static.myserver]
# stamp = 'sdns://AQcAAAAAAAAAAAAQMi5kbnNjcnlwdC1jZXJ0Lg'
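
Static server entries such as the one above are fully described by their DNS stamp. A hedged sketch of decoding that stamp with the go-dnsstamps package this project already depends on; the fields printed are the ones used elsewhere in this changeset:

```go
package main

import (
	"fmt"

	stamps "github.com/jedisct1/go-dnsstamps"
)

func main() {
	// The same example stamp as in the [static] section above.
	stamp, err := stamps.NewServerStampFromString("sdns://AQcAAAAAAAAAAAAQMi5kbnNjcnlwdC1jZXJ0Lg")
	if err != nil {
		fmt.Println("could not parse stamp:", err)
		return
	}
	fmt.Println(stamp.Proto.String(), stamp.ProviderName, stamp.ServerAddrStr)
}
```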

View File

@ -14,12 +14,23 @@
## If this happens, set `block_ipv6` to `false` in the main config file.
## Forward *.lan, *.local, *.home, *.home.arpa, *.internal and *.localdomain to 192.168.1.1
# lan 192.168.1.1
# local 192.168.1.1
# home 192.168.1.1
# home.arpa 192.168.1.1
# internal 192.168.1.1
# localdomain 192.168.1.1
# lan 192.168.1.1
# local 192.168.1.1
# home 192.168.1.1
# home.arpa 192.168.1.1
# internal 192.168.1.1
# localdomain 192.168.1.1
# 192.in-addr.arpa 192.168.1.1
## Forward queries for example.com and *.example.com to 9.9.9.9 and 8.8.8.8
# example.com 9.9.9.9,8.8.8.8
# example.com 9.9.9.9,8.8.8.8
## Forward queries to a resolver using IPv6
# ipv6.example.com [2001:DB8::42]:53
## Forward queries for .onion names to a local Tor client
## Tor must be configured with the following in the torrc file:
## DNSPort 9053
## AutomapHostsOnResolve 1
# onion 127.0.0.1:9053

View File

@ -5,8 +5,9 @@ package main
import (
"encoding/hex"
stamps "github.com/jedisct1/go-dnsstamps"
"testing"
stamps "github.com/jedisct1/go-dnsstamps"
)
func FuzzParseODoHTargetConfigs(f *testing.F) {
@ -23,8 +24,12 @@ func FuzzParseODoHTargetConfigs(f *testing.F) {
func FuzzParseStampParser(f *testing.F) {
f.Add("sdns://AgcAAAAAAAAACzEwNC4yMS42Ljc4AA1kb2guY3J5cHRvLnN4Ci9kbnMtcXVlcnk")
f.Add("sdns://AgcAAAAAAAAAGlsyNjA2OjQ3MDA6MzAzNzo6NjgxNTo2NGVdABJkb2gtaXB2Ni5jcnlwdG8uc3gKL2Rucy1xdWVyeQ")
f.Add("sdns://AQcAAAAAAAAADTUxLjE1LjEyMi4yNTAg6Q3ZfapcbHgiHKLF7QFoli0Ty1Vsz3RXs1RUbxUrwZAcMi5kbnNjcnlwdC1jZXJ0LnNjYWxld2F5LWFtcw")
f.Add("sdns://AQcAAAAAAAAAFlsyMDAxOmJjODoxODIwOjUwZDo6MV0g6Q3ZfapcbHgiHKLF7QFoli0Ty1Vsz3RXs1RUbxUrwZAcMi5kbnNjcnlwdC1jZXJ0LnNjYWxld2F5LWFtcw")
f.Add(
"sdns://AQcAAAAAAAAADTUxLjE1LjEyMi4yNTAg6Q3ZfapcbHgiHKLF7QFoli0Ty1Vsz3RXs1RUbxUrwZAcMi5kbnNjcnlwdC1jZXJ0LnNjYWxld2F5LWFtcw",
)
f.Add(
"sdns://AQcAAAAAAAAAFlsyMDAxOmJjODoxODIwOjUwZDo6MV0g6Q3ZfapcbHgiHKLF7QFoli0Ty1Vsz3RXs1RUbxUrwZAcMi5kbnNjcnlwdC1jZXJ0LnNjYWxld2F5LWFtcw",
)
f.Add("sdns://gQ8xNjMuMTcyLjE4MC4xMjU")
f.Add("sdns://BQcAAAAAAAAADm9kb2guY3J5cHRvLnN4Ci9kbnMtcXVlcnk")
f.Add("sdns://hQcAAAAAAAAAACCi3jNJDEdtNW4tvHN8J3lpIklSa2Wrj7qaNCgEgci9_BpvZG9oLXJlbGF5LmVkZ2Vjb21wdXRlLmFwcAEv")

View File

@ -1,8 +1,9 @@
package main
import (
"encoding/base64"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"strings"
@ -29,7 +30,27 @@ func (handler localDoHHandler) ServeHTTP(writer http.ResponseWriter, request *ht
writer.WriteHeader(404)
return
}
if request.Header.Get("Content-Type") != dataType {
packet := []byte{}
var err error
start := time.Now()
if request.Method == "POST" &&
request.Header.Get("Content-Type") == dataType {
packet, err = io.ReadAll(io.LimitReader(request.Body, int64(MaxDNSPacketSize)))
if err != nil {
dlog.Warnf("No body in a local DoH query")
return
}
} else if request.Method == "GET" && request.Header.Get("Accept") == dataType {
encodedPacket := request.URL.Query().Get("dns")
if len(encodedPacket) >= MinDNSPacketSize*4/3 && len(encodedPacket) <= MaxDNSPacketSize*4/3 {
packet, err = base64.RawURLEncoding.DecodeString(encodedPacket)
if err != nil {
dlog.Warnf("Invalid base64 in a local DoH query")
return
}
}
}
if len(packet) < MinDNSPacketSize {
writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(400)
writer.Write([]byte("dnscrypt-proxy local DoH server\n"))
@ -41,12 +62,6 @@ func (handler localDoHHandler) ServeHTTP(writer http.ResponseWriter, request *ht
return
}
xClientAddr := net.Addr(clientAddr)
start := time.Now()
packet, err := ioutil.ReadAll(io.LimitReader(request.Body, MaxHTTPBodyLength))
if err != nil {
dlog.Warnf("No body in a local DoH query")
return
}
hasEDNS0Padding, err := hasEDNS0Padding(packet)
if err != nil {
writer.WriteHeader(400)
@ -76,6 +91,7 @@ func (handler localDoHHandler) ServeHTTP(writer http.ResponseWriter, request *ht
writer.Header().Set("X-Pad", pad)
}
writer.Header().Set("Content-Type", dataType)
writer.Header().Set("Content-Length", fmt.Sprint(len(response)))
writer.WriteHeader(200)
writer.Write(response)
}
@ -97,7 +113,25 @@ func (proxy *Proxy) localDoHListener(acceptPc *net.TCPListener) {
}
func dohPaddedLen(unpaddedLen int) int {
boundaries := [...]int{64, 128, 192, 256, 320, 384, 512, 704, 768, 896, 960, 1024, 1088, 1152, 2688, 4080, MaxDNSPacketSize}
boundaries := [...]int{
64,
128,
192,
256,
320,
384,
512,
704,
768,
896,
960,
1024,
1088,
1152,
2688,
4080,
MaxDNSPacketSize,
}
for _, boundary := range boundaries {
if boundary >= unpaddedLen {
return boundary
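
The reformatted `dohPaddedLen` above only changes layout; the behaviour is still "round the response length up to the next padding bucket". A minimal, self-contained sketch of that rounding (the real function's last bucket is `MaxDNSPacketSize`; this sketch simply stops at 4080 and leaves larger inputs unpadded):

```go
package main

import "fmt"

// paddedLen mirrors the bucket-rounding idea of dohPaddedLen:
// it returns the smallest boundary that is >= unpaddedLen.
func paddedLen(unpaddedLen int) int {
	boundaries := []int{64, 128, 192, 256, 320, 384, 512, 704, 768, 896, 960, 1024, 1088, 1152, 2688, 4080}
	for _, boundary := range boundaries {
		if boundary >= unpaddedLen {
			return boundary
		}
	}
	return unpaddedLen // beyond every bucket: left as-is in this sketch
}

func main() {
	fmt.Println(paddedLen(100)) // 128
	fmt.Println(paddedLen(512)) // 512
	fmt.Println(paddedLen(513)) // 704
}
```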

View File

@ -16,13 +16,25 @@ func Logger(logMaxSize int, logMaxAge int, logMaxBackups int, fileName string) i
if st.Mode().IsDir() {
dlog.Fatalf("[%v] is a directory", fileName)
}
fp, err := os.OpenFile(fileName, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0644)
fp, err := os.OpenFile(fileName, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0o644)
if err != nil {
dlog.Fatalf("Unable to access [%v]: [%v]", fileName, err)
}
return fp
}
logger := &lumberjack.Logger{LocalTime: true, MaxSize: logMaxSize, MaxAge: logMaxAge, MaxBackups: logMaxBackups, Filename: fileName, Compress: true}
if fp, err := os.OpenFile(fileName, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0o644); err == nil {
fp.Close()
} else {
dlog.Errorf("Unable to create [%v]: [%v]", fileName, err)
}
logger := &lumberjack.Logger{
LocalTime: true,
MaxSize: logMaxSize,
MaxAge: logMaxAge,
MaxBackups: logMaxBackups,
Filename: fileName,
Compress: true,
}
return logger
}
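
The hunk above also pre-creates the log file (so permission problems surface immediately) before handing rotation over to lumberjack. For reference, a hedged sketch of how such a rotating logger is typically wired up as an `io.Writer`; the `gopkg.in/natefinch/lumberjack.v2` import path, the file name, and the standard-library `log` wiring are assumptions for illustration, not necessarily what dnscrypt-proxy does elsewhere:

```go
package main

import (
	"log"

	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	// lumberjack.Logger implements io.Writer and rotates the file
	// based on the size/age/backup limits, mirroring the fields set above.
	rotator := &lumberjack.Logger{
		Filename:   "query.log", // hypothetical path
		MaxSize:    10,          // megabytes
		MaxAge:     7,           // days
		MaxBackups: 1,
		LocalTime:  true,
		Compress:   true,
	}
	log.SetOutput(rotator)
	log.Println("logging through a size-rotated file")
}
```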

View File

@ -15,7 +15,7 @@ import (
)
const (
AppVersion = "2.1.1"
AppVersion = "2.1.5"
DefaultConfigFileName = "dnscrypt-proxy.toml"
)
@ -27,13 +27,18 @@ type App struct {
}
func main() {
TimezoneSetup()
tzErr := TimezoneSetup()
dlog.Init("dnscrypt-proxy", dlog.SeverityNotice, "DAEMON")
if tzErr != nil {
dlog.Warnf("Timezone setup failed: [%v]", tzErr)
}
runtime.MemProfileRate = 0
seed := make([]byte, 8)
crypto_rand.Read(seed)
rand.Seed(int64(binary.LittleEndian.Uint64(seed[:])))
if _, err := crypto_rand.Read(seed); err != nil {
dlog.Fatal(err)
}
rand.Seed(int64(binary.LittleEndian.Uint64(seed)))
pwd, err := os.Getwd()
if err != nil {
@ -46,6 +51,7 @@ func main() {
flags.Resolve = flag.String("resolve", "", "resolve a DNS name (string can be <name> or <name>,<resolver address>)")
flags.List = flag.Bool("list", false, "print the list of available resolvers for the enabled filters")
flags.ListAll = flag.Bool("list-all", false, "print the complete list of available resolvers, ignoring filters")
flags.IncludeRelays = flag.Bool("include-relays", false, "include the list of available relays in the output of -list and -list-all")
flags.JSONOutput = flag.Bool("json", false, "output list as JSON")
flags.Check = flag.Bool("check", false, "check the configuration file and exit")
flags.ConfigFile = flag.String("config", DefaultConfigFileName, "Path to the configuration file")
@ -60,6 +66,10 @@ func main() {
os.Exit(0)
}
if fullexecpath, err := os.Executable(); err == nil {
WarnIfMaybeWritableByOtherUsers(fullexecpath)
}
app := &App{
flags: &flags,
}
@ -124,7 +134,7 @@ func (app *App) AppMain() {
dlog.Fatal(err)
}
if err := PidFileCreate(); err != nil {
dlog.Criticalf("Unable to create the PID file: %v", err)
dlog.Errorf("Unable to create the PID file: [%v]", err)
}
if err := app.proxy.InitPluginsGlobals(); err != nil {
dlog.Fatal(err)
@ -139,7 +149,9 @@ func (app *App) AppMain() {
}
func (app *App) Stop(service service.Service) error {
PidFileRemove()
if err := PidFileRemove(); err != nil {
dlog.Warnf("Failed to remove the PID file: [%v]", err)
}
dlog.Notice("Stopped.")
return nil
}
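
The main() change above stops ignoring the `crypto/rand.Read` error before the bytes are used to seed `math/rand`. A condensed sketch of the same pattern on its own:

```go
package main

import (
	crypto_rand "crypto/rand"
	"encoding/binary"
	"log"
	"math/rand"
)

func main() {
	seed := make([]byte, 8)
	// If the system CSPRNG fails there is no safe fallback, so bail out
	// instead of silently seeding math/rand with zeros.
	if _, err := crypto_rand.Read(seed); err != nil {
		log.Fatal(err)
	}
	rand.Seed(int64(binary.LittleEndian.Uint64(seed)))
	log.Printf("first value from the seeded PRNG: %d", rand.Intn(1000))
}
```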

View File

@ -32,8 +32,9 @@ func NetProbe(proxy *Proxy, address string, timeout int) error {
pc, err := net.DialUDP("udp", nil, remoteUDPAddr)
if err == nil {
// Write at least 1 byte. This ensures that sockets are ready to use for writing.
// Windows specific: during the system startup, sockets can be created but the underlying buffers may not be setup yet. If this is the case
// Write fails with WSAENOBUFS: "An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full"
// Windows specific: during system startup, sockets can be created but the underlying buffers may not be
// set up yet. If this is the case, Write fails with WSAENOBUFS: "An operation on a socket could not be
// performed because the system lacked sufficient buffer space or because a queue was full"
_, err = pc.Write([]byte{0})
}
if err != nil {

View File

@ -181,7 +181,7 @@ func (q ODoHQuery) decryptResponse(response []byte) ([]byte, error) {
responseLength := binary.BigEndian.Uint16(responsePlaintext[0:2])
valid := 1
for i := 4 + int(responseLength); i < len(responsePlaintext); i++ {
valid = valid & subtle.ConstantTimeByteEq(response[i], 0x00)
valid &= subtle.ConstantTimeByteEq(response[i], 0x00)
}
if valid != 1 {
return nil, fmt.Errorf("Malformed response")
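
The `valid &= …` change is purely cosmetic; the loop still verifies, in constant time, that every byte following the encapsulated response is zero padding. A standalone sketch of that check using `crypto/subtle`:

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// allZeroPadding reports whether every byte of pad is 0x00 without
// branching on the data, mirroring the padding check above.
func allZeroPadding(pad []byte) bool {
	valid := 1
	for i := 0; i < len(pad); i++ {
		valid &= subtle.ConstantTimeByteEq(pad[i], 0x00)
	}
	return valid == 1
}

func main() {
	fmt.Println(allZeroPadding([]byte{0, 0, 0}))    // true
	fmt.Println(allZeroPadding([]byte{0, 0x41, 0})) // false
}
```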

View File

@ -15,10 +15,10 @@ func PidFileCreate() error {
if pidFile == nil || len(*pidFile) == 0 {
return nil
}
if err := os.MkdirAll(filepath.Dir(*pidFile), 0755); err != nil {
if err := os.MkdirAll(filepath.Dir(*pidFile), 0o755); err != nil {
return err
}
return safefile.WriteFile(*pidFile, []byte(strconv.Itoa(os.Getpid())), 0644)
return safefile.WriteFile(*pidFile, []byte(strconv.Itoa(os.Getpid())), 0o644)
}
func PidFileRemove() error {

View File

@ -30,13 +30,13 @@ func (plugin *PluginAllowedIP) Description() string {
func (plugin *PluginAllowedIP) Init(proxy *Proxy) error {
dlog.Noticef("Loading the set of allowed IP rules from [%s]", proxy.allowedIPFile)
bin, err := ReadTextFile(proxy.allowedIPFile)
lines, err := ReadTextFile(proxy.allowedIPFile)
if err != nil {
return err
}
plugin.allowedPrefixes = iradix.New()
plugin.allowedIPs = make(map[string]interface{})
for lineNo, line := range strings.Split(string(bin), "\n") {
for lineNo, line := range strings.Split(lines, "\n") {
line = TrimAndStripInlineComments(line)
if len(line) == 0 {
continue
@ -119,10 +119,14 @@ func (plugin *PluginAllowedIP) Eval(pluginsState *PluginsState, msg *dns.Msg) er
if plugin.logger != nil {
qName := pluginsState.qName
var clientIPStr string
if pluginsState.clientProto == "udp" {
switch pluginsState.clientProto {
case "udp":
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
} else {
case "tcp", "local_doh":
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
default:
// Ignore internal flow.
return nil
}
var line string
if plugin.format == "tsv" {
@ -130,7 +134,14 @@ func (plugin *PluginAllowedIP) Eval(pluginsState *PluginsState, msg *dns.Msg) er
year, month, day := now.Date()
hour, minute, second := now.Clock()
tsStr := fmt.Sprintf("[%d-%02d-%02d %02d:%02d:%02d]", year, int(month), day, hour, minute, second)
line = fmt.Sprintf("%s\t%s\t%s\t%s\t%s\n", tsStr, clientIPStr, StringQuote(qName), StringQuote(ipStr), StringQuote(reason))
line = fmt.Sprintf(
"%s\t%s\t%s\t%s\t%s\n",
tsStr,
clientIPStr,
StringQuote(qName),
StringQuote(ipStr),
StringQuote(reason),
)
} else if plugin.format == "ltsv" {
line = fmt.Sprintf("time:%d\thost:%s\tqname:%s\tip:%s\tmessage:%s\n", time.Now().Unix(), clientIPStr, StringQuote(qName), StringQuote(ipStr), StringQuote(reason))
} else {
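
The `switch` on `clientProto` replaces the old two-way `if`, so queries from internal flows (neither UDP, TCP nor the local DoH listener) now skip logging instead of being type-asserted to a TCP address. The pattern boils down to extracting the client IP from a `net.Addr` whose concrete type depends on the transport; a minimal sketch, not the plugin itself:

```go
package main

import (
	"fmt"
	"net"
)

// clientIP returns the IP behind a net.Addr for the transports the
// logger cares about, and false for anything else (internal flows).
func clientIP(clientProto string, clientAddr net.Addr) (string, bool) {
	switch clientProto {
	case "udp":
		return clientAddr.(*net.UDPAddr).IP.String(), true
	case "tcp", "local_doh":
		return clientAddr.(*net.TCPAddr).IP.String(), true
	default:
		return "", false
	}
}

func main() {
	udpAddr := &net.UDPAddr{IP: net.ParseIP("192.0.2.10"), Port: 53}
	if ip, ok := clientIP("udp", udpAddr); ok {
		fmt.Println("client:", ip) // client: 192.0.2.10
	}
	if _, ok := clientIP("internal", nil); !ok {
		fmt.Println("internal flow: skipped")
	}
}
```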

View File

@ -29,13 +29,13 @@ func (plugin *PluginAllowName) Description() string {
func (plugin *PluginAllowName) Init(proxy *Proxy) error {
dlog.Noticef("Loading the set of allowed names from [%s]", proxy.allowNameFile)
bin, err := ReadTextFile(proxy.allowNameFile)
lines, err := ReadTextFile(proxy.allowNameFile)
if err != nil {
return err
}
plugin.allWeeklyRanges = proxy.allWeeklyRanges
plugin.patternMatcher = NewPatternMatcher()
for lineNo, line := range strings.Split(string(bin), "\n") {
for lineNo, line := range strings.Split(lines, "\n") {
line = TrimAndStripInlineComments(line)
if len(line) == 0 {
continue
@ -96,10 +96,14 @@ func (plugin *PluginAllowName) Eval(pluginsState *PluginsState, msg *dns.Msg) er
pluginsState.sessionData["whitelisted"] = true
if plugin.logger != nil {
var clientIPStr string
if pluginsState.clientProto == "udp" {
switch pluginsState.clientProto {
case "udp":
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
} else {
case "tcp", "local_doh":
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
default:
// Ignore internal flow.
return nil
}
var line string
if plugin.format == "tsv" {

View File

@ -30,13 +30,13 @@ func (plugin *PluginBlockIP) Description() string {
func (plugin *PluginBlockIP) Init(proxy *Proxy) error {
dlog.Noticef("Loading the set of IP blocking rules from [%s]", proxy.blockIPFile)
bin, err := ReadTextFile(proxy.blockIPFile)
lines, err := ReadTextFile(proxy.blockIPFile)
if err != nil {
return err
}
plugin.blockedPrefixes = iradix.New()
plugin.blockedIPs = make(map[string]interface{})
for lineNo, line := range strings.Split(string(bin), "\n") {
for lineNo, line := range strings.Split(lines, "\n") {
line = TrimAndStripInlineComments(line)
if len(line) == 0 {
continue
@ -123,10 +123,14 @@ func (plugin *PluginBlockIP) Eval(pluginsState *PluginsState, msg *dns.Msg) erro
if plugin.logger != nil {
qName := pluginsState.qName
var clientIPStr string
if pluginsState.clientProto == "udp" {
switch pluginsState.clientProto {
case "udp":
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
} else {
case "tcp", "local_doh":
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
default:
// Ignore internal flow.
return nil
}
var line string
if plugin.format == "tsv" {
@ -134,7 +138,14 @@ func (plugin *PluginBlockIP) Eval(pluginsState *PluginsState, msg *dns.Msg) erro
year, month, day := now.Date()
hour, minute, second := now.Clock()
tsStr := fmt.Sprintf("[%d-%02d-%02d %02d:%02d:%02d]", year, int(month), day, hour, minute, second)
line = fmt.Sprintf("%s\t%s\t%s\t%s\t%s\n", tsStr, clientIPStr, StringQuote(qName), StringQuote(ipStr), StringQuote(reason))
line = fmt.Sprintf(
"%s\t%s\t%s\t%s\t%s\n",
tsStr,
clientIPStr,
StringQuote(qName),
StringQuote(ipStr),
StringQuote(reason),
)
} else if plugin.format == "ltsv" {
line = fmt.Sprintf("time:%d\thost:%s\tqname:%s\tip:%s\tmessage:%s\n", time.Now().Unix(), clientIPStr, StringQuote(qName), StringQuote(ipStr), StringQuote(reason))
} else {

View File

@ -35,8 +35,10 @@ func (plugin *PluginBlockIPv6) Eval(pluginsState *PluginsState, msg *dns.Msg) er
}
synth := EmptyResponseFromMessage(msg)
hinfo := new(dns.HINFO)
hinfo.Hdr = dns.RR_Header{Name: question.Name, Rrtype: dns.TypeHINFO,
Class: dns.ClassINET, Ttl: 86400}
hinfo.Hdr = dns.RR_Header{
Name: question.Name, Rrtype: dns.TypeHINFO,
Class: dns.ClassINET, Ttl: 86400,
}
hinfo.Cpu = "AAAA queries have been locally blocked by dnscrypt-proxy"
hinfo.Os = "Set block_ipv6 to false to disable that feature"
synth.Answer = []dns.RR{hinfo}
@ -54,8 +56,10 @@ func (plugin *PluginBlockIPv6) Eval(pluginsState *PluginsState, msg *dns.Msg) er
soa.Minttl = 2400
soa.Expire = 604800
soa.Retry = 300
soa.Hdr = dns.RR_Header{Name: parentZone, Rrtype: dns.TypeSOA,
Class: dns.ClassINET, Ttl: 60}
soa.Hdr = dns.RR_Header{
Name: parentZone, Rrtype: dns.TypeSOA,
Class: dns.ClassINET, Ttl: 60,
}
synth.Ns = []dns.RR{soa}
pluginsState.synthResponse = synth
pluginsState.action = PluginsActionSynth

View File

@ -44,10 +44,14 @@ func (blockedNames *BlockedNames) check(pluginsState *PluginsState, qName string
pluginsState.returnCode = PluginsReturnCodeReject
if blockedNames.logger != nil {
var clientIPStr string
if pluginsState.clientProto == "udp" {
switch pluginsState.clientProto {
case "udp":
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
} else {
case "tcp", "local_doh":
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
default:
// Ignore internal flow.
return false, nil
}
var line string
if blockedNames.format == "tsv" {
@ -71,8 +75,7 @@ func (blockedNames *BlockedNames) check(pluginsState *PluginsState, qName string
// ---
type PluginBlockName struct {
}
type PluginBlockName struct{}
func (plugin *PluginBlockName) Name() string {
return "block_name"
@ -84,7 +87,7 @@ func (plugin *PluginBlockName) Description() string {
func (plugin *PluginBlockName) Init(proxy *Proxy) error {
dlog.Noticef("Loading the set of blocking rules from [%s]", proxy.blockNameFile)
bin, err := ReadTextFile(proxy.blockNameFile)
lines, err := ReadTextFile(proxy.blockNameFile)
if err != nil {
return err
}
@ -92,7 +95,7 @@ func (plugin *PluginBlockName) Init(proxy *Proxy) error {
allWeeklyRanges: proxy.allWeeklyRanges,
patternMatcher: NewPatternMatcher(),
}
for lineNo, line := range strings.Split(string(bin), "\n") {
for lineNo, line := range strings.Split(lines, "\n") {
line = TrimAndStripInlineComments(line)
if len(line) == 0 {
continue
@ -148,8 +151,7 @@ func (plugin *PluginBlockName) Eval(pluginsState *PluginsState, msg *dns.Msg) er
// ---
type PluginBlockNameResponse struct {
}
type PluginBlockNameResponse struct{}
func (plugin *PluginBlockNameResponse) Name() string {
return "block_name"

View File

@ -119,9 +119,11 @@ var undelegatedSet = []string{
"envoy",
"example",
"f.f.ip6.arpa",
"fritz.box",
"grp",
"gw==",
"home",
"home.arpa",
"hub",
"internal",
"intra",
@ -134,6 +136,7 @@ var undelegatedSet = []string{
"localdomain",
"localhost",
"localnet",
"mail",
"modem",
"mynet",
"myrouter",

View File

@ -6,8 +6,7 @@ import (
"github.com/miekg/dns"
)
type PluginBlockUnqualified struct {
}
type PluginBlockUnqualified struct{}
func (plugin *PluginBlockUnqualified) Name() string {
return "block_unqualified"

View File

@ -6,8 +6,8 @@ import (
"sync"
"time"
lru "github.com/hashicorp/golang-lru"
"github.com/miekg/dns"
sieve "github.com/opencoff/go-sieve"
)
const StaleResponseTTL = 30 * time.Second
@ -19,7 +19,7 @@ type CachedResponse struct {
type CachedResponses struct {
sync.RWMutex
cache *lru.ARCCache
cache *sieve.Sieve[[32]byte, CachedResponse]
}
var cachedResponses CachedResponses
@ -45,8 +45,7 @@ func computeCacheKey(pluginsState *PluginsState, msg *dns.Msg) [32]byte {
// ---
type PluginCache struct {
}
type PluginCache struct{}
func (plugin *PluginCache) Name() string {
return "cache"
@ -76,12 +75,11 @@ func (plugin *PluginCache) Eval(pluginsState *PluginsState, msg *dns.Msg) error
cachedResponses.RUnlock()
return nil
}
cachedAny, ok := cachedResponses.cache.Get(cacheKey)
cached, ok := cachedResponses.cache.Get(cacheKey)
if !ok {
cachedResponses.RUnlock()
return nil
}
cached := cachedAny.(CachedResponse)
expiration := cached.expiration
synth := cached.msg.Copy()
cachedResponses.RUnlock()
@ -108,8 +106,7 @@ func (plugin *PluginCache) Eval(pluginsState *PluginsState, msg *dns.Msg) error
// ---
type PluginCacheResponse struct {
}
type PluginCacheResponse struct{}
func (plugin *PluginCacheResponse) Name() string {
return "cache_response"
@ -139,7 +136,13 @@ func (plugin *PluginCacheResponse) Eval(pluginsState *PluginsState, msg *dns.Msg
return nil
}
cacheKey := computeCacheKey(pluginsState, msg)
ttl := getMinTTL(msg, pluginsState.cacheMinTTL, pluginsState.cacheMaxTTL, pluginsState.cacheNegMinTTL, pluginsState.cacheNegMaxTTL)
ttl := getMinTTL(
msg,
pluginsState.cacheMinTTL,
pluginsState.cacheMaxTTL,
pluginsState.cacheNegMinTTL,
pluginsState.cacheNegMaxTTL,
)
cachedResponse := CachedResponse{
expiration: time.Now().Add(ttl),
msg: *msg,
@ -147,8 +150,8 @@ func (plugin *PluginCacheResponse) Eval(pluginsState *PluginsState, msg *dns.Msg
cachedResponses.Lock()
if cachedResponses.cache == nil {
var err error
cachedResponses.cache, err = lru.NewARC(pluginsState.cacheSize)
if err != nil {
cachedResponses.cache = sieve.New[[32]byte, CachedResponse](pluginsState.cacheSize)
if cachedResponses.cache == nil {
cachedResponses.Unlock()
return err
}

View File

@ -19,12 +19,14 @@ type CloakedName struct {
lastUpdate *time.Time
lineNo int
isIP bool
PTR []string
}
type PluginCloak struct {
sync.RWMutex
patternMatcher *PatternMatcher
ttl uint32
createPTR bool
}
func (plugin *PluginCloak) Name() string {
@ -37,14 +39,15 @@ func (plugin *PluginCloak) Description() string {
func (plugin *PluginCloak) Init(proxy *Proxy) error {
dlog.Noticef("Loading the set of cloaking rules from [%s]", proxy.cloakFile)
bin, err := ReadTextFile(proxy.cloakFile)
lines, err := ReadTextFile(proxy.cloakFile)
if err != nil {
return err
}
plugin.ttl = proxy.cloakTTL
plugin.createPTR = proxy.cloakedPTR
plugin.patternMatcher = NewPatternMatcher()
cloakedNames := make(map[string]*CloakedName)
for lineNo, line := range strings.Split(string(bin), "\n") {
for lineNo, line := range strings.Split(lines, "\n") {
line = TrimAndStripInlineComments(line)
if len(line) == 0 {
continue
@ -67,11 +70,12 @@ func (plugin *PluginCloak) Init(proxy *Proxy) error {
if !found {
cloakedName = &CloakedName{}
}
if ip := net.ParseIP(target); ip != nil {
ip := net.ParseIP(target)
if ip != nil {
if ipv4 := ip.To4(); ipv4 != nil {
cloakedName.ipv4 = append((*cloakedName).ipv4, ipv4)
cloakedName.ipv4 = append(cloakedName.ipv4, ipv4)
} else if ipv6 := ip.To16(); ipv6 != nil {
cloakedName.ipv6 = append((*cloakedName).ipv6, ipv6)
cloakedName.ipv6 = append(cloakedName.ipv6, ipv6)
} else {
dlog.Errorf("Invalid IP address in cloaking rule at line %d", 1+lineNo)
continue
@ -82,6 +86,28 @@ func (plugin *PluginCloak) Init(proxy *Proxy) error {
}
cloakedName.lineNo = lineNo + 1
cloakedNames[line] = cloakedName
if !plugin.createPTR || strings.Contains(line, "*") || !cloakedName.isIP {
continue
}
var ptrLine string
if ipv4 := ip.To4(); ipv4 != nil {
reversed, _ := dns.ReverseAddr(ip.To4().String())
ptrLine = strings.TrimSuffix(reversed, ".")
} else {
reversed, _ := dns.ReverseAddr(cloakedName.ipv6[0].To16().String())
ptrLine = strings.TrimSuffix(reversed, ".")
}
ptrQueryLine := ptrEntryToQuery(ptrLine)
ptrCloakedName, found := cloakedNames[ptrQueryLine]
if !found {
ptrCloakedName = &CloakedName{}
}
ptrCloakedName.isIP = true
ptrCloakedName.PTR = append((*ptrCloakedName).PTR, ptrNameToFQDN(line))
ptrCloakedName.lineNo = lineNo + 1
cloakedNames[ptrQueryLine] = ptrCloakedName
}
for line, cloakedName := range cloakedNames {
if err := plugin.patternMatcher.Add(line, cloakedName, cloakedName.lineNo); err != nil {
@ -91,6 +117,15 @@ func (plugin *PluginCloak) Init(proxy *Proxy) error {
return nil
}
func ptrEntryToQuery(ptrEntry string) string {
return "=" + ptrEntry
}
func ptrNameToFQDN(ptrLine string) string {
ptrLine = strings.TrimPrefix(ptrLine, "=")
return ptrLine + "."
}
func (plugin *PluginCloak) Drop() error {
return nil
}
@ -101,7 +136,7 @@ func (plugin *PluginCloak) Reload() error {
func (plugin *PluginCloak) Eval(pluginsState *PluginsState, msg *dns.Msg) error {
question := msg.Question[0]
if question.Qclass != dns.ClassINET || (question.Qtype != dns.TypeA && question.Qtype != dns.TypeAAAA) {
if question.Qclass != dns.ClassINET || question.Qtype == dns.TypeNS || question.Qtype == dns.TypeSOA {
return nil
}
now := time.Now()
@ -111,6 +146,12 @@ func (plugin *PluginCloak) Eval(pluginsState *PluginsState, msg *dns.Msg) error
plugin.RUnlock()
return nil
}
if question.Qtype != dns.TypeA && question.Qtype != dns.TypeAAAA && question.Qtype != dns.TypePTR {
plugin.RUnlock()
pluginsState.action = PluginsActionReject
pluginsState.returnCode = PluginsReturnCodeCloak
return nil
}
cloakedName := xcloakedName.(*CloakedName)
ttl, expired := plugin.ttl, false
if cloakedName.lastUpdate != nil {
@ -157,15 +198,25 @@ func (plugin *PluginCloak) Eval(pluginsState *PluginsState, msg *dns.Msg) error
rr.A = ip
synth.Answer = append(synth.Answer, rr)
}
} else {
} else if question.Qtype == dns.TypeAAAA {
for _, ip := range cloakedName.ipv6 {
rr := new(dns.AAAA)
rr.Hdr = dns.RR_Header{Name: question.Name, Rrtype: dns.TypeAAAA, Class: dns.ClassINET, Ttl: ttl}
rr.AAAA = ip
synth.Answer = append(synth.Answer, rr)
}
} else if question.Qtype == dns.TypePTR {
for _, ptr := range cloakedName.PTR {
rr := new(dns.PTR)
rr.Hdr = dns.RR_Header{Name: question.Name, Rrtype: dns.TypePTR, Class: dns.ClassINET, Ttl: ttl}
rr.Ptr = ptr
synth.Answer = append(synth.Answer, rr)
}
}
rand.Shuffle(len(synth.Answer), func(i, j int) { synth.Answer[i], synth.Answer[j] = synth.Answer[j], synth.Answer[i] })
rand.Shuffle(
len(synth.Answer),
func(i, j int) { synth.Answer[i], synth.Answer[j] = synth.Answer[j], synth.Answer[i] },
)
pluginsState.synthResponse = synth
pluginsState.action = PluginsActionSynth
pluginsState.returnCode = PluginsReturnCodeCloak
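
The new cloaking code derives reverse (`PTR`) rule names from each cloaked IP with `dns.ReverseAddr`, then strips the trailing dot so the result can be matched like any other pattern. A small sketch of just that derivation, using `github.com/miekg/dns`:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	for _, ip := range []string{"192.168.1.1", "2001:db8::42"} {
		// dns.ReverseAddr returns the in-addr.arpa. / ip6.arpa. name
		// with a trailing dot, e.g. "1.1.168.192.in-addr.arpa."
		reversed, err := dns.ReverseAddr(ip)
		if err != nil {
			fmt.Println("not an IP address:", ip)
			continue
		}
		ptrRuleName := strings.TrimSuffix(reversed, ".")
		fmt.Printf("%-12s -> %s\n", ip, ptrRuleName)
	}
}
```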

View File

@ -34,19 +34,22 @@ func (plugin *PluginDNS64) Description() string {
}
func (plugin *PluginDNS64) Init(proxy *Proxy) error {
plugin.ipv4Resolver = proxy.listenAddresses[0] //recursively to ourselves
if len(proxy.listenAddresses) == 0 {
return errors.New("At least one listening IP address must be configured for the DNS64 plugin to work")
}
plugin.ipv4Resolver = proxy.listenAddresses[0] // query is sent to ourselves
plugin.pref64Mutex = new(sync.RWMutex)
plugin.proxy = proxy
if len(proxy.dns64Prefixes) != 0 {
plugin.pref64Mutex.RLock()
defer plugin.pref64Mutex.RUnlock()
plugin.pref64Mutex.Lock()
defer plugin.pref64Mutex.Unlock()
for _, prefStr := range proxy.dns64Prefixes {
_, pref, err := net.ParseCIDR(prefStr)
if err != nil {
return err
}
dlog.Infof("Registered DNS64 prefix [%s]", pref.String())
dlog.Noticef("Registered DNS64 prefix [%s]", pref.String())
plugin.pref64 = append(plugin.pref64, pref)
}
} else if len(proxy.dns64Resolvers) != 0 {
@ -54,7 +57,10 @@ func (plugin *PluginDNS64) Init(proxy *Proxy) error {
if err := plugin.refreshPref64(); err != nil {
return err
}
} else {
return nil
}
dlog.Notice("DNS64 map enabled")
return nil
}
@ -87,14 +93,22 @@ func (plugin *PluginDNS64) Eval(pluginsState *PluginsState, msg *dns.Msg) error
if !plugin.proxy.clientsCountInc() {
return errors.New("Too many concurrent connections to handle DNS64 subqueries")
}
respPacket := plugin.proxy.processIncomingQuery("trampoline", plugin.proxy.mainProto, msgAPacket, nil, nil, time.Now(), false)
respPacket := plugin.proxy.processIncomingQuery(
"trampoline",
plugin.proxy.mainProto,
msgAPacket,
nil,
nil,
time.Now(),
false,
)
plugin.proxy.clientsCountDec()
resp := dns.Msg{}
if err := resp.Unpack(respPacket); err != nil {
return err
}
if err != nil || resp.Rcode != dns.RcodeSuccess {
if resp.Rcode != dns.RcodeSuccess {
return nil
}
@ -110,10 +124,12 @@ func (plugin *PluginDNS64) Eval(pluginsState *PluginsState, msg *dns.Msg) error
}
}
synthAAAAs := make([]dns.RR, 0)
synth64 := make([]dns.RR, 0)
for _, answer := range resp.Answer {
header := answer.Header()
if header.Rrtype == dns.TypeA {
if header.Rrtype == dns.TypeCNAME {
synth64 = append(synth64, answer)
} else if header.Rrtype == dns.TypeA {
ttl := initialTTL
if ttl > header.Ttl {
ttl = header.Ttl
@ -121,24 +137,28 @@ func (plugin *PluginDNS64) Eval(pluginsState *PluginsState, msg *dns.Msg) error
ipv4 := answer.(*dns.A).A.To4()
if ipv4 != nil {
plugin.pref64Mutex.Lock()
plugin.pref64Mutex.RLock()
for _, prefix := range plugin.pref64 {
ipv6 := translateToIPv6(ipv4, prefix)
synthAAAA := new(dns.AAAA)
synthAAAA.Hdr = dns.RR_Header{Name: header.Name, Rrtype: dns.TypeAAAA, Class: header.Class, Ttl: ttl}
synthAAAA.Hdr = dns.RR_Header{
Name: header.Name,
Rrtype: dns.TypeAAAA,
Class: header.Class,
Ttl: ttl,
}
synthAAAA.AAAA = ipv6
synthAAAAs = append(synthAAAAs, synthAAAA)
synth64 = append(synth64, synthAAAA)
}
plugin.pref64Mutex.Unlock()
plugin.pref64Mutex.RUnlock()
}
}
}
synth := EmptyResponseFromMessage(msg)
synth.Answer = append(synth.Answer, synthAAAAs...)
msg.Answer = synth64
msg.AuthenticatedData = false
msg.SetEdns0(uint16(MaxDNSUDPSafePacketSize), false)
pluginsState.synthResponse = synth
pluginsState.action = PluginsActionSynth
pluginsState.returnCode = PluginsReturnCodeCloak
return nil
@ -173,7 +193,6 @@ func (plugin *PluginDNS64) fetchPref64(resolver string) error {
client := new(dns.Client)
resp, _, err := client.Exchange(msg, resolver)
if err != nil {
return err
}
@ -190,17 +209,18 @@ func (plugin *PluginDNS64) fetchPref64(resolver string) error {
if ipv6 != nil && len(ipv6) == net.IPv6len {
prefEnd := 0
if wka := net.IPv4(ipv6[12], ipv6[13], ipv6[14], ipv6[15]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { //96
if wka := net.IPv4(ipv6[12], ipv6[13], ipv6[14], ipv6[15]); wka.Equal(rfc7050WKA1) ||
wka.Equal(rfc7050WKA2) { // 96
prefEnd = 12
} else if wka := net.IPv4(ipv6[9], ipv6[10], ipv6[11], ipv6[12]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { //64
} else if wka := net.IPv4(ipv6[9], ipv6[10], ipv6[11], ipv6[12]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { // 64
prefEnd = 8
} else if wka := net.IPv4(ipv6[7], ipv6[9], ipv6[10], ipv6[11]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { //56
} else if wka := net.IPv4(ipv6[7], ipv6[9], ipv6[10], ipv6[11]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { // 56
prefEnd = 7
} else if wka := net.IPv4(ipv6[6], ipv6[7], ipv6[9], ipv6[10]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { //48
} else if wka := net.IPv4(ipv6[6], ipv6[7], ipv6[9], ipv6[10]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { // 48
prefEnd = 6
} else if wka := net.IPv4(ipv6[5], ipv6[6], ipv6[7], ipv6[9]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { //40
} else if wka := net.IPv4(ipv6[5], ipv6[6], ipv6[7], ipv6[9]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { // 40
prefEnd = 5
} else if wka := net.IPv4(ipv6[4], ipv6[5], ipv6[6], ipv6[7]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { //32
} else if wka := net.IPv4(ipv6[4], ipv6[5], ipv6[6], ipv6[7]); wka.Equal(rfc7050WKA1) || wka.Equal(rfc7050WKA2) { // 32
prefEnd = 4
}
@ -222,8 +242,8 @@ func (plugin *PluginDNS64) fetchPref64(resolver string) error {
return errors.New("Empty Pref64 list")
}
plugin.pref64Mutex.RLock()
defer plugin.pref64Mutex.RUnlock()
plugin.pref64Mutex.Lock()
defer plugin.pref64Mutex.Unlock()
plugin.pref64 = prefixes
return nil
}
@ -235,8 +255,8 @@ func (plugin *PluginDNS64) refreshPref64() error {
}
}
plugin.pref64Mutex.Lock()
defer plugin.pref64Mutex.Unlock()
plugin.pref64Mutex.RLock()
defer plugin.pref64Mutex.RUnlock()
if len(plugin.pref64) == 0 {
return errors.New("Empty Pref64 list")
}
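
`translateToIPv6` itself is not part of this diff, but the surrounding code shows the idea: every A record is mapped into each known Pref64 prefix to synthesize AAAA records. For the Well-Known Prefix `64:ff9b::/96` the mapping simply places the four IPv4 bytes in the last four bytes of the IPv6 address; the sketch below covers only that common /96 case (the general RFC 6052 embedding for shorter prefixes is more involved and not shown here):

```go
package main

import (
	"fmt"
	"net"
)

// embedIPv4In96Prefix maps an IPv4 address into a /96 DNS64 prefix,
// e.g. 192.0.2.33 + 64:ff9b::/96 -> 64:ff9b::c000:221.
func embedIPv4In96Prefix(ipv4 net.IP, prefix *net.IPNet) net.IP {
	ipv6 := make(net.IP, net.IPv6len)
	copy(ipv6, prefix.IP.To16())
	copy(ipv6[12:], ipv4.To4())
	return ipv6
}

func main() {
	_, pref64, _ := net.ParseCIDR("64:ff9b::/96")
	v4 := net.ParseIP("192.0.2.33")
	fmt.Println(embedIPv4In96Prefix(v4, pref64)) // 64:ff9b::c000:221
}
```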

View File

@ -9,8 +9,7 @@ import (
"github.com/miekg/dns"
)
type PluginFirefox struct {
}
type PluginFirefox struct{}
func (plugin *PluginFirefox) Name() string {
return "firefox"

View File

@ -29,11 +29,11 @@ func (plugin *PluginForward) Description() string {
func (plugin *PluginForward) Init(proxy *Proxy) error {
dlog.Noticef("Loading the set of forwarding rules from [%s]", proxy.forwardFile)
bin, err := ReadTextFile(proxy.forwardFile)
lines, err := ReadTextFile(proxy.forwardFile)
if err != nil {
return err
}
for lineNo, line := range strings.Split(string(bin), "\n") {
for lineNo, line := range strings.Split(lines, "\n") {
line = TrimAndStripInlineComments(line)
if len(line) == 0 {
continue
@ -49,9 +49,16 @@ func (plugin *PluginForward) Init(proxy *Proxy) error {
var servers []string
for _, server := range strings.Split(serversStr, ",") {
server = strings.TrimSpace(server)
if net.ParseIP(server) != nil {
server = fmt.Sprintf("%s:%d", server, 53)
server = strings.TrimPrefix(server, "[")
server = strings.TrimSuffix(server, "]")
if ip := net.ParseIP(server); ip != nil {
if ip.To4() != nil {
server = fmt.Sprintf("%s:%d", server, 53)
} else {
server = fmt.Sprintf("[%s]:%d", server, 53)
}
}
dlog.Infof("Forwarding [%s] to %s", domain, server)
servers = append(servers, server)
}
if len(servers) == 0 {
@ -82,7 +89,9 @@ func (plugin *PluginForward) Eval(pluginsState *PluginsState, msg *dns.Msg) erro
if candidateLen > qNameLen {
continue
}
if qName[qNameLen-candidateLen:] == candidate.domain && (candidateLen == qNameLen || (qName[qNameLen-candidateLen-1] == '.')) {
if (qName[qNameLen-candidateLen:] == candidate.domain &&
(candidateLen == qNameLen || (qName[qNameLen-candidateLen-1] == '.'))) ||
(candidate.domain == ".") {
servers = candidate.servers
break
}
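
Two things change in the forwarding plugin: upstream addresses without a port now get `:53` appended with correct bracketing for IPv6 literals, and the match loop treats a rule for the root domain `"."` as a catch-all. The address normalization on its own looks roughly like this (a sketch, not the plugin code):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// normalizeServer appends the default DNS port to bare IP literals,
// keeping IPv6 addresses bracketed, and leaves host:port strings alone.
func normalizeServer(server string) string {
	server = strings.TrimSpace(server)
	trimmed := strings.TrimSuffix(strings.TrimPrefix(server, "["), "]")
	if ip := net.ParseIP(trimmed); ip != nil {
		if ip.To4() != nil {
			return fmt.Sprintf("%s:%d", trimmed, 53)
		}
		return fmt.Sprintf("[%s]:%d", trimmed, 53)
	}
	return server // already host:port (or a name); left untouched
}

func main() {
	fmt.Println(normalizeServer("9.9.9.9"))        // 9.9.9.9:53
	fmt.Println(normalizeServer("[2001:db8::42]")) // [2001:db8::42]:53
	fmt.Println(normalizeServer("127.0.0.1:9053")) // 127.0.0.1:9053
}
```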

View File

@ -30,12 +30,18 @@ func (plugin *PluginGetSetPayloadSize) Eval(pluginsState *PluginsState, msg *dns
dnssec := false
if edns0 != nil {
pluginsState.maxUnencryptedUDPSafePayloadSize = int(edns0.UDPSize())
pluginsState.originalMaxPayloadSize = Max(pluginsState.maxUnencryptedUDPSafePayloadSize-ResponseOverhead, pluginsState.originalMaxPayloadSize)
pluginsState.originalMaxPayloadSize = Max(
pluginsState.maxUnencryptedUDPSafePayloadSize-ResponseOverhead,
pluginsState.originalMaxPayloadSize,
)
dnssec = edns0.Do()
}
var options *[]dns.EDNS0
pluginsState.dnssec = dnssec
pluginsState.maxPayloadSize = Min(MaxDNSUDPPacketSize-ResponseOverhead, Max(pluginsState.originalMaxPayloadSize, pluginsState.maxPayloadSize))
pluginsState.maxPayloadSize = Min(
MaxDNSUDPPacketSize-ResponseOverhead,
Max(pluginsState.originalMaxPayloadSize, pluginsState.maxPayloadSize),
)
if pluginsState.maxPayloadSize > 512 {
extra2 := []dns.RR{}
for _, extra := range msg.Extra {

View File

@ -43,17 +43,21 @@ func (plugin *PluginNxLog) Eval(pluginsState *PluginsState, msg *dns.Msg) error
if msg.Rcode != dns.RcodeNameError {
return nil
}
var clientIPStr string
switch pluginsState.clientProto {
case "udp":
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
case "tcp", "local_doh":
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
default:
// Ignore internal flow.
return nil
}
question := msg.Question[0]
qType, ok := dns.TypeToString[question.Qtype]
if !ok {
qType = string(qType)
}
var clientIPStr string
if pluginsState.clientProto == "udp" {
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
} else {
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
}
qName := pluginsState.qName
var line string

View File

@ -43,6 +43,16 @@ func (plugin *PluginQueryLog) Reload() error {
}
func (plugin *PluginQueryLog) Eval(pluginsState *PluginsState, msg *dns.Msg) error {
var clientIPStr string
switch pluginsState.clientProto {
case "udp":
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
case "tcp", "local_doh":
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
default:
// Ignore internal flow.
return nil
}
question := msg.Question[0]
qType, ok := dns.TypeToString[question.Qtype]
if !ok {
@ -55,12 +65,6 @@ func (plugin *PluginQueryLog) Eval(pluginsState *PluginsState, msg *dns.Msg) err
}
}
}
var clientIPStr string
if pluginsState.clientProto == "udp" {
clientIPStr = (*pluginsState.clientAddr).(*net.UDPAddr).IP.String()
} else {
clientIPStr = (*pluginsState.clientAddr).(*net.TCPAddr).IP.String()
}
qName := pluginsState.qName
if pluginsState.cacheHit {
@ -86,8 +90,16 @@ func (plugin *PluginQueryLog) Eval(pluginsState *PluginsState, msg *dns.Msg) err
year, month, day := now.Date()
hour, minute, second := now.Clock()
tsStr := fmt.Sprintf("[%d-%02d-%02d %02d:%02d:%02d]", year, int(month), day, hour, minute, second)
line = fmt.Sprintf("%s\t%s\t%s\t%s\t%s\t%dms\t%s\n", tsStr, clientIPStr, StringQuote(qName), qType, returnCode, requestDuration/time.Millisecond,
StringQuote(pluginsState.serverName))
line = fmt.Sprintf(
"%s\t%s\t%s\t%s\t%s\t%dms\t%s\n",
tsStr,
clientIPStr,
StringQuote(qName),
qType,
returnCode,
requestDuration/time.Millisecond,
StringQuote(pluginsState.serverName),
)
} else if plugin.format == "ltsv" {
cached := 0
if pluginsState.cacheHit {

View File

@ -18,8 +18,10 @@ func (plugin *PluginQueryMeta) Description() string {
func (plugin *PluginQueryMeta) Init(proxy *Proxy) error {
queryMetaRR := new(dns.TXT)
queryMetaRR.Hdr = dns.RR_Header{Name: ".", Rrtype: dns.TypeTXT,
Class: dns.ClassINET, Ttl: 86400}
queryMetaRR.Hdr = dns.RR_Header{
Name: ".", Rrtype: dns.TypeTXT,
Class: dns.ClassINET, Ttl: 86400,
}
queryMetaRR.Txt = proxy.queryMeta
plugin.queryMetaRR = queryMetaRR
return nil

View File

@ -189,11 +189,11 @@ func parseBlockedQueryResponse(blockedResponse string, pluginsGlobals *PluginsGl
if strings.HasPrefix(blockedResponse, "a:") {
blockedIPStrings := strings.Split(blockedResponse, ",")
(*pluginsGlobals).respondWithIPv4 = net.ParseIP(strings.TrimPrefix(blockedIPStrings[0], "a:"))
pluginsGlobals.respondWithIPv4 = net.ParseIP(strings.TrimPrefix(blockedIPStrings[0], "a:"))
if (*pluginsGlobals).respondWithIPv4 == nil {
if pluginsGlobals.respondWithIPv4 == nil {
dlog.Notice("Error parsing IPv4 response given in blocked_query_response option, defaulting to `hinfo`")
(*pluginsGlobals).refusedCodeInResponses = false
pluginsGlobals.refusedCodeInResponses = false
return
}
@ -203,28 +203,30 @@ func parseBlockedQueryResponse(blockedResponse string, pluginsGlobals *PluginsGl
if strings.HasPrefix(ipv6Response, "[") {
ipv6Response = strings.Trim(ipv6Response, "[]")
}
(*pluginsGlobals).respondWithIPv6 = net.ParseIP(ipv6Response)
pluginsGlobals.respondWithIPv6 = net.ParseIP(ipv6Response)
if (*pluginsGlobals).respondWithIPv6 == nil {
dlog.Notice("Error parsing IPv6 response given in blocked_query_response option, defaulting to IPv4")
if pluginsGlobals.respondWithIPv6 == nil {
dlog.Notice(
"Error parsing IPv6 response given in blocked_query_response option, defaulting to IPv4",
)
}
} else {
dlog.Noticef("Invalid IPv6 response given in blocked_query_response option [%s], the option should take the form 'a:<IPv4>,aaaa:<IPv6>'", blockedIPStrings[1])
}
}
if (*pluginsGlobals).respondWithIPv6 == nil {
(*pluginsGlobals).respondWithIPv6 = (*pluginsGlobals).respondWithIPv4
if pluginsGlobals.respondWithIPv6 == nil {
pluginsGlobals.respondWithIPv6 = pluginsGlobals.respondWithIPv4
}
} else {
switch blockedResponse {
case "refused":
(*pluginsGlobals).refusedCodeInResponses = true
pluginsGlobals.refusedCodeInResponses = true
case "hinfo":
(*pluginsGlobals).refusedCodeInResponses = false
pluginsGlobals.refusedCodeInResponses = false
default:
dlog.Noticef("Invalid blocked_query_response option [%s], defaulting to `hinfo`", blockedResponse)
(*pluginsGlobals).refusedCodeInResponses = false
pluginsGlobals.refusedCodeInResponses = false
}
}
}
@ -238,7 +240,13 @@ type Plugin interface {
Eval(pluginsState *PluginsState, msg *dns.Msg) error
}
func NewPluginsState(proxy *Proxy, clientProto string, clientAddr *net.Addr, serverProto string, start time.Time) PluginsState {
func NewPluginsState(
proxy *Proxy,
clientProto string,
clientAddr *net.Addr,
serverProto string,
start time.Time,
) PluginsState {
return PluginsState{
action: PluginsActionContinue,
returnCode: PluginsReturnCodePass,
@ -262,7 +270,11 @@ func NewPluginsState(proxy *Proxy, clientProto string, clientAddr *net.Addr, ser
}
}
func (pluginsState *PluginsState) ApplyQueryPlugins(pluginsGlobals *PluginsGlobals, packet []byte, needsEDNS0Padding bool) ([]byte, error) {
func (pluginsState *PluginsState) ApplyQueryPlugins(
pluginsGlobals *PluginsGlobals,
packet []byte,
needsEDNS0Padding bool,
) ([]byte, error) {
msg := dns.Msg{}
if err := msg.Unpack(packet); err != nil {
return packet, err
@ -288,7 +300,13 @@ func (pluginsState *PluginsState) ApplyQueryPlugins(pluginsGlobals *PluginsGloba
return packet, err
}
if pluginsState.action == PluginsActionReject {
synth := RefusedResponseFromMessage(&msg, pluginsGlobals.refusedCodeInResponses, pluginsGlobals.respondWithIPv4, pluginsGlobals.respondWithIPv6, pluginsState.rejectTTL)
synth := RefusedResponseFromMessage(
&msg,
pluginsGlobals.refusedCodeInResponses,
pluginsGlobals.respondWithIPv4,
pluginsGlobals.respondWithIPv6,
pluginsState.rejectTTL,
)
pluginsState.synthResponse = synth
}
if pluginsState.action != PluginsActionContinue {
@ -309,7 +327,11 @@ func (pluginsState *PluginsState) ApplyQueryPlugins(pluginsGlobals *PluginsGloba
return packet2, nil
}
func (pluginsState *PluginsState) ApplyResponsePlugins(pluginsGlobals *PluginsGlobals, packet []byte, ttl *uint32) ([]byte, error) {
func (pluginsState *PluginsState) ApplyResponsePlugins(
pluginsGlobals *PluginsGlobals,
packet []byte,
ttl *uint32,
) ([]byte, error) {
msg := dns.Msg{Compress: true}
if err := msg.Unpack(packet); err != nil {
if len(packet) >= MinDNSPacketSize && HasTCFlag(packet) {
@ -336,7 +358,13 @@ func (pluginsState *PluginsState) ApplyResponsePlugins(pluginsGlobals *PluginsGl
return packet, err
}
if pluginsState.action == PluginsActionReject {
synth := RefusedResponseFromMessage(&msg, pluginsGlobals.refusedCodeInResponses, pluginsGlobals.respondWithIPv4, pluginsGlobals.respondWithIPv6, pluginsState.rejectTTL)
synth := RefusedResponseFromMessage(
&msg,
pluginsGlobals.refusedCodeInResponses,
pluginsGlobals.respondWithIPv4,
pluginsGlobals.respondWithIPv6,
pluginsState.rejectTTL,
)
pluginsState.synthResponse = synth
}
if pluginsState.action != PluginsActionContinue {
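
`parseBlockedQueryResponse` above accepts either a keyword (`refused`, `hinfo`) or an explicit `a:<IPv4>,aaaa:<IPv6>` pair, falling back to the IPv4 answer when the IPv6 half is missing or invalid. A trimmed-down sketch of parsing just that address form, detached from the plugin globals:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseBlockedIPs parses "a:<IPv4>[,aaaa:<IPv6>]" as used by the
// blocked_query_response option; the IPv6 part may be bracketed.
func parseBlockedIPs(value string) (ipv4, ipv6 net.IP, err error) {
	parts := strings.Split(value, ",")
	ipv4 = net.ParseIP(strings.TrimPrefix(parts[0], "a:"))
	if ipv4 == nil {
		return nil, nil, fmt.Errorf("invalid IPv4 in %q", value)
	}
	ipv6 = ipv4 // fall back to the IPv4 answer when no IPv6 is given
	if len(parts) > 1 && strings.HasPrefix(parts[1], "aaaa:") {
		raw := strings.Trim(strings.TrimPrefix(parts[1], "aaaa:"), "[]")
		if parsed := net.ParseIP(raw); parsed != nil {
			ipv6 = parsed
		}
	}
	return ipv4, ipv6, nil
}

func main() {
	v4, v6, err := parseBlockedIPs("a:0.0.0.0,aaaa:[::]")
	fmt.Println(v4, v6, err) // 0.0.0.0 :: <nil>
}
```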

View File

@ -15,8 +15,7 @@ import (
)
func (proxy *Proxy) dropPrivilege(userStr string, fds []*os.File) {
currentUser, err := user.Current()
if err != nil && currentUser.Uid != "0" {
if os.Geteuid() != 0 {
dlog.Fatal("Root privileges are required in order to switch to a different user. Maybe try again with 'sudo'")
}
userInfo, err := user.Lookup(userStr)
@ -25,9 +24,19 @@ func (proxy *Proxy) dropPrivilege(userStr string, fds []*os.File) {
if err != nil {
uid, err2 := strconv.Atoi(userStr)
if err2 != nil || uid <= 0 {
dlog.Fatalf("Unable to retrieve any information about user [%s]: [%s] - Remove the user_name directive from the configuration file in order to avoid identity switch", userStr, err)
dlog.Fatalf(
"Unable to retrieve any information about user [%s]: [%s] - Remove the user_name directive from the configuration file in order to avoid identity switch",
userStr,
err,
)
}
dlog.Warnf("Unable to retrieve any information about user [%s]: [%s] - Switching to user id [%v] with the same group id, as [%v] looks like a user id. But you should remove or fix the user_name directive in the configuration file if possible", userStr, err, uid, uid)
dlog.Warnf(
"Unable to retrieve any information about user [%s]: [%s] - Switching to user id [%v] with the same group id, as [%v] looks like a user id. But you should remove or fix the user_name directive in the configuration file if possible",
userStr,
err,
uid,
uid,
)
userInfo = &user.User{Uid: userStr, Gid: userStr}
}
uid, err := strconv.Atoi(userInfo.Uid)

View File

@ -17,8 +17,7 @@ import (
)
func (proxy *Proxy) dropPrivilege(userStr string, fds []*os.File) {
currentUser, err := user.Current()
if err != nil && currentUser.Uid != "0" {
if os.Geteuid() != 0 {
dlog.Fatal("Root privileges are required in order to switch to a different user. Maybe try again with 'sudo'")
}
userInfo, err := user.Lookup(userStr)
@ -27,9 +26,19 @@ func (proxy *Proxy) dropPrivilege(userStr string, fds []*os.File) {
if err != nil {
uid, err2 := strconv.Atoi(userStr)
if err2 != nil || uid <= 0 {
dlog.Fatalf("Unable to retrieve any information about user [%s]: [%s] - Remove the user_name directive from the configuration file in order to avoid identity switch", userStr, err)
dlog.Fatalf(
"Unable to retrieve any information about user [%s]: [%s] - Remove the user_name directive from the configuration file in order to avoid identity switch",
userStr,
err,
)
}
dlog.Warnf("Unable to retrieve any information about user [%s]: [%s] - Switching to user id [%v] with the same group id, as [%v] looks like a user id. But you should remove or fix the user_name directive in the configuration file if possible", userStr, err, uid, uid)
dlog.Warnf(
"Unable to retrieve any information about user [%s]: [%s] - Switching to user id [%v] with the same group id, as [%v] looks like a user id. But you should remove or fix the user_name directive in the configuration file if possible",
userStr,
err,
uid,
uid,
)
userInfo = &user.User{Uid: userStr, Gid: userStr}
}
uid, err := strconv.Atoi(userInfo.Uid)

View File

@ -68,9 +68,13 @@ type Proxy struct {
nxLogFile string
proxySecretKey [32]byte
proxyPublicKey [32]byte
ServerNames []string
DisabledServerNames []string
requiredProps stamps.ServerInformalProperties
certRefreshDelayAfterFailure time.Duration
timeout time.Duration
certRefreshDelay time.Duration
certRefreshConcurrency int
cacheSize int
logMaxBackups int
logMaxAge int
@ -83,6 +87,7 @@ type Proxy struct {
cacheMinTTL uint32
cacheNegMaxTTL uint32
cloakTTL uint32
cloakedPTR bool
cache bool
pluginBlockIPv6 bool
ephemeralKeys bool
@ -93,9 +98,6 @@ type Proxy struct {
anonDirectCertFallback bool
pluginBlockUndelegated bool
child bool
requiredProps stamps.ServerInformalProperties
ServerNames []string
DisabledServerNames []string
SourceIPv4 bool
SourceIPv6 bool
SourceDNSCrypt bool
@ -116,11 +118,18 @@ func (proxy *Proxy) registerLocalDoHListener(listener *net.TCPListener) {
}
func (proxy *Proxy) addDNSListener(listenAddrStr string) {
listenUDPAddr, err := net.ResolveUDPAddr("udp", listenAddrStr)
udp := "udp"
tcp := "tcp"
isIPv4 := isDigit(listenAddrStr[0])
if isIPv4 {
udp = "udp4"
tcp = "tcp4"
}
listenUDPAddr, err := net.ResolveUDPAddr(udp, listenAddrStr)
if err != nil {
dlog.Fatal(err)
}
listenTCPAddr, err := net.ResolveTCPAddr("tcp", listenAddrStr)
listenTCPAddr, err := net.ResolveTCPAddr(tcp, listenAddrStr)
if err != nil {
dlog.Fatal(err)
}
@ -139,11 +148,11 @@ func (proxy *Proxy) addDNSListener(listenAddrStr string) {
// if 'userName' is set and we are the parent process
if !proxy.child {
// parent
listenerUDP, err := net.ListenUDP("udp", listenUDPAddr)
listenerUDP, err := net.ListenUDP(udp, listenUDPAddr)
if err != nil {
dlog.Fatal(err)
}
listenerTCP, err := net.ListenTCP("tcp", listenTCPAddr)
listenerTCP, err := net.ListenTCP(tcp, listenTCPAddr)
if err != nil {
dlog.Fatal(err)
}
@ -184,7 +193,12 @@ func (proxy *Proxy) addDNSListener(listenAddrStr string) {
}
func (proxy *Proxy) addLocalDoHListener(listenAddrStr string) {
listenTCPAddr, err := net.ResolveTCPAddr("tcp", listenAddrStr)
network := "tcp"
isIPv4 := isDigit(listenAddrStr[0])
if isIPv4 {
network = "tcp4"
}
listenTCPAddr, err := net.ResolveTCPAddr(network, listenAddrStr)
if err != nil {
dlog.Fatal(err)
}
@ -200,7 +214,7 @@ func (proxy *Proxy) addLocalDoHListener(listenAddrStr string) {
// if 'userName' is set and we are the parent process
if !proxy.child {
// parent
listenerTCP, err := net.ListenTCP("tcp", listenTCPAddr)
listenerTCP, err := net.ListenTCP(network, listenTCPAddr)
if err != nil {
dlog.Fatal(err)
}
@ -232,6 +246,17 @@ func (proxy *Proxy) StartProxy() {
}
curve25519.ScalarBaseMult(&proxy.proxyPublicKey, &proxy.proxySecretKey)
proxy.startAcceptingClients()
if !proxy.child {
// Notify the service manager that dnscrypt-proxy is ready. dnscrypt-proxy manages itself in case
// servers are not immediately live/reachable. The service manager may assume it is initialized and
// functioning properly. Note that the service manager 'Ready' signal is delayed if netprobe
// cannot reach the internet during start-up.
if err := ServiceManagerReadyNotify(); err != nil {
dlog.Fatal(err)
}
}
proxy.xTransport.internalResolverReady = false
proxy.xTransport.internalResolvers = proxy.listenAddresses
liveServers, err := proxy.serversInfo.refresh(proxy)
if liveServers > 0 {
proxy.certIgnoreTimestamp = false
@ -241,11 +266,6 @@ func (proxy *Proxy) StartProxy() {
}
if liveServers > 0 {
dlog.Noticef("dnscrypt-proxy is ready - live servers: %d", liveServers)
if !proxy.child {
if err := ServiceManagerReadyNotify(); err != nil {
dlog.Fatal(err)
}
}
} else if err != nil {
dlog.Error(err)
dlog.Notice("dnscrypt-proxy is waiting for at least one server to be reachable")
@ -283,10 +303,16 @@ func (proxy *Proxy) updateRegisteredServers() error {
dlog.Criticalf("Unable to use source [%s]: [%s]", source.name, err)
return err
}
dlog.Warnf("Error in source [%s]: [%s] -- Continuing with reduced server count [%d]", source.name, err, len(registeredServers))
dlog.Warnf(
"Error in source [%s]: [%s] -- Continuing with reduced server count [%d]",
source.name,
err,
len(registeredServers),
)
}
for _, registeredServer := range registeredServers {
if registeredServer.stamp.Proto != stamps.StampProtoTypeDNSCryptRelay && registeredServer.stamp.Proto != stamps.StampProtoTypeODoHRelay {
if registeredServer.stamp.Proto != stamps.StampProtoTypeDNSCryptRelay &&
registeredServer.stamp.Proto != stamps.StampProtoTypeODoHRelay {
if len(proxy.ServerNames) > 0 {
if !includesName(proxy.ServerNames, registeredServer.name) {
continue
@ -310,13 +336,19 @@ func (proxy *Proxy) updateRegisteredServers() error {
continue
}
}
if registeredServer.stamp.Proto == stamps.StampProtoTypeDNSCryptRelay || registeredServer.stamp.Proto == stamps.StampProtoTypeODoHRelay {
if registeredServer.stamp.Proto == stamps.StampProtoTypeDNSCryptRelay ||
registeredServer.stamp.Proto == stamps.StampProtoTypeODoHRelay {
var found bool
for i, currentRegisteredRelay := range proxy.registeredRelays {
if currentRegisteredRelay.name == registeredServer.name {
found = true
if currentRegisteredRelay.stamp.String() != registeredServer.stamp.String() {
dlog.Infof("Updating stamp for [%s] was: %s now: %s", registeredServer.name, currentRegisteredRelay.stamp.String(), registeredServer.stamp.String())
dlog.Infof(
"Updating stamp for [%s] was: %s now: %s",
registeredServer.name,
currentRegisteredRelay.stamp.String(),
registeredServer.stamp.String(),
)
proxy.registeredRelays[i].stamp = registeredServer.stamp
dlog.Debugf("Total count of registered relays %v", len(proxy.registeredRelays))
}
@ -370,7 +402,15 @@ func (proxy *Proxy) udpListener(clientPc *net.UDPConn) {
packet := buffer[:length]
if !proxy.clientsCountInc() {
dlog.Warnf("Too many incoming connections (max=%d)", proxy.maxClients)
proxy.processIncomingQuery("udp", proxy.mainProto, packet, &clientAddr, clientPc, time.Now(), true) // respond synchronously, but only to cached/synthesized queries
proxy.processIncomingQuery(
"udp",
proxy.mainProto,
packet,
&clientAddr,
clientPc,
time.Now(),
true,
) // respond synchronously, but only to cached/synthesized queries
continue
}
go func() {
@ -414,7 +454,13 @@ func (proxy *Proxy) udpListenerFromAddr(listenAddr *net.UDPAddr) error {
if err != nil {
return err
}
clientPc, err := listenConfig.ListenPacket(context.Background(), "udp", listenAddr.String())
listenAddrStr := listenAddr.String()
network := "udp"
isIPv4 := isDigit(listenAddrStr[0])
if isIPv4 {
network = "udp4"
}
clientPc, err := listenConfig.ListenPacket(context.Background(), network, listenAddrStr)
if err != nil {
return err
}
@ -428,7 +474,13 @@ func (proxy *Proxy) tcpListenerFromAddr(listenAddr *net.TCPAddr) error {
if err != nil {
return err
}
acceptPc, err := listenConfig.Listen(context.Background(), "tcp", listenAddr.String())
listenAddrStr := listenAddr.String()
network := "tcp"
isIPv4 := isDigit(listenAddrStr[0])
if isIPv4 {
network = "tcp4"
}
acceptPc, err := listenConfig.Listen(context.Background(), network, listenAddrStr)
if err != nil {
return err
}
@ -442,7 +494,13 @@ func (proxy *Proxy) localDoHListenerFromAddr(listenAddr *net.TCPAddr) error {
if err != nil {
return err
}
acceptPc, err := listenConfig.Listen(context.Background(), "tcp", listenAddr.String())
listenAddrStr := listenAddr.String()
network := "tcp"
isIPv4 := isDigit(listenAddrStr[0])
if isIPv4 {
network = "tcp4"
}
acceptPc, err := listenConfig.Listen(context.Background(), network, listenAddrStr)
if err != nil {
return err
}
@ -476,7 +534,12 @@ func (proxy *Proxy) prepareForRelay(ip net.IP, port int, encryptedQuery *[]byte)
*encryptedQuery = relayedQuery
}
func (proxy *Proxy) exchangeWithUDPServer(serverInfo *ServerInfo, sharedKey *[32]byte, encryptedQuery []byte, clientNonce []byte) ([]byte, error) {
func (proxy *Proxy) exchangeWithUDPServer(
serverInfo *ServerInfo,
sharedKey *[32]byte,
encryptedQuery []byte,
clientNonce []byte,
) ([]byte, error) {
upstreamAddr := serverInfo.UDPAddr
if serverInfo.Relay != nil && serverInfo.Relay.Dnscrypt != nil {
upstreamAddr = serverInfo.Relay.Dnscrypt.RelayUDPAddr
@ -485,7 +548,7 @@ func (proxy *Proxy) exchangeWithUDPServer(serverInfo *ServerInfo, sharedKey *[32
var pc net.Conn
proxyDialer := proxy.xTransport.proxyDialer
if proxyDialer == nil {
pc, err = net.DialUDP("udp", nil, upstreamAddr)
pc, err = net.DialTimeout("udp", upstreamAddr.String(), serverInfo.Timeout)
} else {
pc, err = (*proxyDialer).Dial("udp", upstreamAddr.String())
}
@ -514,7 +577,12 @@ func (proxy *Proxy) exchangeWithUDPServer(serverInfo *ServerInfo, sharedKey *[32
return proxy.Decrypt(serverInfo, sharedKey, encryptedResponse, clientNonce)
}
func (proxy *Proxy) exchangeWithTCPServer(serverInfo *ServerInfo, sharedKey *[32]byte, encryptedQuery []byte, clientNonce []byte) ([]byte, error) {
func (proxy *Proxy) exchangeWithTCPServer(
serverInfo *ServerInfo,
sharedKey *[32]byte,
encryptedQuery []byte,
clientNonce []byte,
) ([]byte, error) {
upstreamAddr := serverInfo.TCPAddr
if serverInfo.Relay != nil && serverInfo.Relay.Dnscrypt != nil {
upstreamAddr = serverInfo.Relay.Dnscrypt.RelayTCPAddr
@ -523,7 +591,7 @@ func (proxy *Proxy) exchangeWithTCPServer(serverInfo *ServerInfo, sharedKey *[32
var pc net.Conn
proxyDialer := proxy.xTransport.proxyDialer
if proxyDialer == nil {
pc, err = net.DialTCP("tcp", nil, upstreamAddr)
pc, err = net.DialTimeout("tcp", upstreamAddr.String(), serverInfo.Timeout)
} else {
pc, err = (*proxyDialer).Dial("tcp", upstreamAddr.String())
}
@ -566,14 +634,23 @@ func (proxy *Proxy) clientsCountInc() bool {
func (proxy *Proxy) clientsCountDec() {
for {
if count := atomic.LoadUint32(&proxy.clientsCount); count == 0 || atomic.CompareAndSwapUint32(&proxy.clientsCount, count, count-1) {
if count := atomic.LoadUint32(&proxy.clientsCount); count == 0 ||
atomic.CompareAndSwapUint32(&proxy.clientsCount, count, count-1) {
break
}
}
}
func (proxy *Proxy) processIncomingQuery(clientProto string, serverProto string, query []byte, clientAddr *net.Addr, clientPc net.Conn, start time.Time, onlyCached bool) []byte {
var response []byte = nil
func (proxy *Proxy) processIncomingQuery(
clientProto string,
serverProto string,
query []byte,
clientAddr *net.Addr,
clientPc net.Conn,
start time.Time,
onlyCached bool,
) []byte {
var response []byte
if len(query) < MinDNSPacketSize {
return response
}
@ -773,7 +850,7 @@ func (proxy *Proxy) processIncomingQuery(clientProto string, serverProto string,
if pluginsState.dnssec {
dlog.Debug("A response had an invalid DNSSEC signature")
} else {
dlog.Infof("Server [%v] returned temporary error code SERVFAIL -- Invalid DNSSEC signature received or server may be experiencing connectivity issues", serverInfo.Name)
dlog.Infof("A response with status code 2 was received - this is usually a temporary, remote issue with the configuration of the domain name")
serverInfo.noticeFailure(proxy)
}
} else {
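
Several listener paths in proxy.go now pick `"udp4"`/`"tcp4"` instead of `"udp"`/`"tcp"` when the configured listen address starts with a digit, so an IPv4 literal is never bound to a dual-stack socket. `isDigit` is a helper in the dnscrypt-proxy codebase that isn't shown in this diff; the sketch below inlines an equivalent check:

```go
package main

import (
	"fmt"
	"net"
)

// listenNetworks mirrors the selection above: IPv4 literals get the
// address-family-specific network names.
func listenNetworks(listenAddrStr string) (udp, tcp string) {
	udp, tcp = "udp", "tcp"
	if len(listenAddrStr) > 0 && listenAddrStr[0] >= '0' && listenAddrStr[0] <= '9' {
		udp, tcp = "udp4", "tcp4"
	}
	return
}

func main() {
	for _, addr := range []string{"127.0.0.1:53", "[::1]:53"} {
		udp, tcp := listenNetworks(addr)
		if _, err := net.ResolveUDPAddr(udp, addr); err != nil {
			fmt.Println("bad address:", err)
			continue
		}
		fmt.Printf("%-14s -> %s / %s\n", addr, udp, tcp)
	}
}
```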

View File

@ -11,10 +11,12 @@ import (
"github.com/miekg/dns"
)
const myResolverHost string = "resolver.dnscrypt.info."
const nonexistentName string = "nonexistent-zone.dnscrypt-test."
const (
myResolverHost string = "resolver.dnscrypt.info."
nonexistentName string = "nonexistent-zone.dnscrypt-test."
)
func resolveQuery(server string, qName string, qType uint16) (*dns.Msg, error) {
func resolveQuery(server string, qName string, qType uint16, sendClientSubnet bool) (*dns.Msg, error) {
client := new(dns.Client)
client.ReadTimeout = 2 * time.Second
msg := &dns.Msg{
@ -30,9 +32,27 @@ func resolveQuery(server string, qName string, qType uint16) (*dns.Msg, error) {
Rrtype: dns.TypeOPT,
},
}
if sendClientSubnet {
subnet := net.IPNet{IP: net.IPv4(93, 184, 216, 0), Mask: net.CIDRMask(24, 32)}
prr := dns.EDNS0_SUBNET{}
prr.Code = dns.EDNS0SUBNET
bits, totalSize := subnet.Mask.Size()
if totalSize == 32 {
prr.Family = 1
} else if totalSize == 128 { // if we want to test with IPv6
prr.Family = 2
}
prr.SourceNetmask = uint8(bits)
prr.SourceScope = 0
prr.Address = subnet.IP
options.Option = append(options.Option, &prr)
}
msg.Extra = append(msg.Extra, options)
options.SetDo()
options.SetUDPSize(uint16(MaxDNSPacketSize))
msg.Question[0] = dns.Question{Name: qName, Qtype: qType, Qclass: dns.ClassINET}
msg.Id = dns.Id()
for i := 0; i < 3; i++ {
@ -69,9 +89,10 @@ func Resolve(server string, name string, singleResolver bool) {
name = dns.Fqdn(name)
cname := name
var clientSubnet string
for once := true; once; once = false {
response, err := resolveQuery(server, myResolverHost, dns.TypeA)
response, err := resolveQuery(server, myResolverHost, dns.TypeTXT, true)
if err != nil {
fmt.Printf("Unable to resolve: [%s]\n", err)
os.Exit(1)
@ -79,17 +100,22 @@ func Resolve(server string, name string, singleResolver bool) {
fmt.Printf("Resolver : ")
res := make([]string, 0)
for _, answer := range response.Answer {
if answer.Header().Class != dns.ClassINET {
if answer.Header().Class != dns.ClassINET || answer.Header().Rrtype != dns.TypeTXT {
continue
}
var ip string
if answer.Header().Rrtype == dns.TypeA {
ip = answer.(*dns.A).A.String()
} else if answer.Header().Rrtype == dns.TypeAAAA {
ip = answer.(*dns.AAAA).AAAA.String()
for _, txt := range answer.(*dns.TXT).Txt {
if strings.HasPrefix(txt, "Resolver IP: ") {
ip = strings.TrimPrefix(txt, "Resolver IP: ")
} else if strings.HasPrefix(txt, "EDNS0 client subnet: ") {
clientSubnet = strings.TrimPrefix(txt, "EDNS0 client subnet: ")
}
}
if ip == "" {
continue
}
if rev, err := dns.ReverseAddr(ip); err == nil {
response, err = resolveQuery(server, rev, dns.TypePTR)
response, err = resolveQuery(server, rev, dns.TypePTR, false)
if err != nil {
break
}
@ -113,7 +139,7 @@ func Resolve(server string, name string, singleResolver bool) {
if singleResolver {
for once := true; once; once = false {
fmt.Printf("Lying : ")
response, err := resolveQuery(server, nonexistentName, dns.TypeA)
response, err := resolveQuery(server, nonexistentName, dns.TypeA, false)
if err != nil {
break
}
@ -133,6 +159,13 @@ func Resolve(server string, name string, singleResolver bool) {
fmt.Println("no, the resolver doesn't support DNSSEC")
}
}
fmt.Printf("ECS : ")
if clientSubnet != "" {
fmt.Println("client network address is sent to authoritative servers")
} else {
fmt.Println("ignored or selective")
}
}
}
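The hunks above switch the resolver-identification probe to TXT and attach an EDNS0 client-subnet (ECS) option so the tool can report whether the resolver forwards the client network to authoritative servers. A minimal, self-contained sketch of the same option construction with github.com/miekg/dns; the subnet and UDP payload size below are illustrative values, not taken from the diff:

```go
package main

import (
	"fmt"
	"net"

	"github.com/miekg/dns"
)

// buildECSQuery returns a query for qName/qType carrying an EDNS0
// client-subnet option for the given prefix.
func buildECSQuery(qName string, qType uint16, subnet net.IPNet) *dns.Msg {
	msg := new(dns.Msg)
	msg.SetQuestion(dns.Fqdn(qName), qType)

	opt := &dns.OPT{Hdr: dns.RR_Header{Name: ".", Rrtype: dns.TypeOPT}}
	ecs := &dns.EDNS0_SUBNET{Code: dns.EDNS0SUBNET}
	bits, totalSize := subnet.Mask.Size()
	if totalSize == 128 {
		ecs.Family = 2 // IPv6
	} else {
		ecs.Family = 1 // IPv4
	}
	ecs.SourceNetmask = uint8(bits)
	ecs.SourceScope = 0 // the responding server reports the scope it actually used
	ecs.Address = subnet.IP
	opt.Option = append(opt.Option, ecs)
	opt.SetUDPSize(4096) // illustrative EDNS0 payload size
	msg.Extra = append(msg.Extra, opt)
	return msg
}

func main() {
	subnet := net.IPNet{IP: net.IPv4(192, 0, 2, 0), Mask: net.CIDRMask(24, 32)}
	fmt.Println(buildECSQuery("example.com", dns.TypeTXT, subnet).String())
}
```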
@ -142,7 +175,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("Canonical name: ")
for i := 0; i < 100; i++ {
response, err := resolveQuery(server, cname, dns.TypeCNAME)
response, err := resolveQuery(server, cname, dns.TypeCNAME, false)
if err != nil {
break cname
}
@ -166,7 +199,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("IPv4 addresses: ")
response, err := resolveQuery(server, cname, dns.TypeA)
response, err := resolveQuery(server, cname, dns.TypeA, false)
if err != nil {
break
}
@ -186,7 +219,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("IPv6 addresses: ")
response, err := resolveQuery(server, cname, dns.TypeAAAA)
response, err := resolveQuery(server, cname, dns.TypeAAAA, false)
if err != nil {
break
}
@ -208,7 +241,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("Name servers : ")
response, err := resolveQuery(server, cname, dns.TypeNS)
response, err := resolveQuery(server, cname, dns.TypeNS, false)
if err != nil {
break
}
@ -238,7 +271,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("Mail servers : ")
response, err := resolveQuery(server, cname, dns.TypeMX)
response, err := resolveQuery(server, cname, dns.TypeMX, false)
if err != nil {
break
}
@ -262,7 +295,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("HTTPS alias : ")
response, err := resolveQuery(server, cname, dns.TypeHTTPS)
response, err := resolveQuery(server, cname, dns.TypeHTTPS, false)
if err != nil {
break
}
@ -308,7 +341,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("Host info : ")
response, err := resolveQuery(server, cname, dns.TypeHINFO)
response, err := resolveQuery(server, cname, dns.TypeHINFO, false)
if err != nil {
break
}
@ -328,7 +361,7 @@ cname:
for once := true; once; once = false {
fmt.Printf("TXT records : ")
response, err := resolveQuery(server, cname, dns.TypeTXT)
response, err := resolveQuery(server, cname, dns.TypeTXT, false)
if err != nil {
break
}

View File

@ -67,6 +67,7 @@ type ServerInfo struct {
type LBStrategy interface {
getCandidate(serversCount int) int
getActiveCount(serversCount int) int
}
type LBStrategyP2 struct{}
@ -75,30 +76,50 @@ func (LBStrategyP2) getCandidate(serversCount int) int {
return rand.Intn(Min(serversCount, 2))
}
func (LBStrategyP2) getActiveCount(serversCount int) int {
return Min(serversCount, 2)
}
type LBStrategyPN struct{ n int }
func (s LBStrategyPN) getCandidate(serversCount int) int {
return rand.Intn(Min(serversCount, s.n))
}
func (s LBStrategyPN) getActiveCount(serversCount int) int {
return Min(serversCount, s.n)
}
type LBStrategyPH struct{}
func (LBStrategyPH) getCandidate(serversCount int) int {
return rand.Intn(Max(Min(serversCount, 2), serversCount/2))
}
func (LBStrategyPH) getActiveCount(serversCount int) int {
return Max(Min(serversCount, 2), serversCount/2)
}
type LBStrategyFirst struct{}
func (LBStrategyFirst) getCandidate(int) int {
return 0
}
func (LBStrategyFirst) getActiveCount(int) int {
return 1
}
type LBStrategyRandom struct{}
func (LBStrategyRandom) getCandidate(serversCount int) int {
return rand.Intn(serversCount)
}
func (LBStrategyRandom) getActiveCount(serversCount int) int {
return serversCount
}
var DefaultLBStrategy = LBStrategyP2{}
type DNSCryptRelay struct {
@ -126,7 +147,12 @@ type ServersInfo struct {
}
func NewServersInfo() ServersInfo {
return ServersInfo{lbStrategy: DefaultLBStrategy, lbEstimator: true, registeredServers: make([]RegisteredServer, 0), registeredRelays: make([]RegisteredServer, 0)}
return ServersInfo{
lbStrategy: DefaultLBStrategy,
lbEstimator: true,
registeredServers: make([]RegisteredServer, 0),
registeredRelays: make([]RegisteredServer, 0),
}
}
func (serversInfo *ServersInfo) registerServer(name string, stamp stamps.ServerStamp) {
@ -197,15 +223,35 @@ func (serversInfo *ServersInfo) refreshServer(proxy *Proxy, name string, stamp s
func (serversInfo *ServersInfo) refresh(proxy *Proxy) (int, error) {
dlog.Debug("Refreshing certificates")
serversInfo.RLock()
registeredServers := serversInfo.registeredServers
// Appending registeredServers slice from sources may allocate new memory.
serversCount := len(serversInfo.registeredServers)
registeredServers := make([]RegisteredServer, serversCount)
copy(registeredServers, serversInfo.registeredServers)
serversInfo.RUnlock()
countChannel := make(chan struct{}, proxy.certRefreshConcurrency)
errorChannel := make(chan error, serversCount)
for i := range registeredServers {
countChannel <- struct{}{}
go func(registeredServer *RegisteredServer) {
err := serversInfo.refreshServer(proxy, registeredServer.name, registeredServer.stamp)
if err == nil {
proxy.xTransport.internalResolverReady = true
}
errorChannel <- err
<-countChannel
}(&registeredServers[i])
}
liveServers := 0
var err error
for _, registeredServer := range registeredServers {
if err = serversInfo.refreshServer(proxy, registeredServer.name, registeredServer.stamp); err == nil {
for i := 0; i < serversCount; i++ {
err = <-errorChannel
if err == nil {
liveServers++
}
}
if liveServers > 0 {
err = nil
}
serversInfo.Lock()
sort.SliceStable(serversInfo.inner, func(i, j int) bool {
return serversInfo.inner[i].initialRtt < serversInfo.inner[j].initialRtt
@ -225,31 +271,44 @@ func (serversInfo *ServersInfo) refresh(proxy *Proxy) (int, error) {
return liveServers, err
}
func (serversInfo *ServersInfo) estimatorUpdate() {
func (serversInfo *ServersInfo) estimatorUpdate(currentActive int) {
// serversInfo.RWMutex is assumed to be Locked
candidate := rand.Intn(len(serversInfo.inner))
if candidate == 0 {
serversCount := len(serversInfo.inner)
activeCount := serversInfo.lbStrategy.getActiveCount(serversCount)
if activeCount == serversCount {
return
}
candidateRtt, currentBestRtt := serversInfo.inner[candidate].rtt.Value(), serversInfo.inner[0].rtt.Value()
if currentBestRtt < 0 {
currentBestRtt = candidateRtt
serversInfo.inner[0].rtt.Set(currentBestRtt)
candidate := rand.Intn(serversCount-activeCount) + activeCount
candidateRtt, currentActiveRtt := serversInfo.inner[candidate].rtt.Value(), serversInfo.inner[currentActive].rtt.Value()
if currentActiveRtt < 0 {
currentActiveRtt = candidateRtt
serversInfo.inner[currentActive].rtt.Set(currentActiveRtt)
return
}
partialSort := false
if candidateRtt < currentBestRtt {
serversInfo.inner[candidate], serversInfo.inner[0] = serversInfo.inner[0], serversInfo.inner[candidate]
if candidateRtt < currentActiveRtt {
serversInfo.inner[candidate], serversInfo.inner[currentActive] = serversInfo.inner[currentActive], serversInfo.inner[candidate]
dlog.Debugf(
"New preferred candidate: %s (RTT: %d vs previous: %d)",
serversInfo.inner[currentActive].Name,
int(candidateRtt),
int(currentActiveRtt),
)
partialSort = true
dlog.Debugf("New preferred candidate: %v (rtt: %d vs previous: %d)", serversInfo.inner[0].Name, int(candidateRtt), int(currentBestRtt))
} else if candidateRtt > 0 && candidateRtt >= currentBestRtt*4.0 {
} else if candidateRtt > 0 && candidateRtt >= (serversInfo.inner[0].rtt.Value()+serversInfo.inner[activeCount-1].rtt.Value())/2.0*4.0 {
if time.Since(serversInfo.inner[candidate].lastActionTS) > time.Duration(1*time.Minute) {
serversInfo.inner[candidate].rtt.Add(MinF(MaxF(candidateRtt/2.0, currentBestRtt*2.0), candidateRtt))
dlog.Debugf("Giving a new chance to candidate [%s], lowering its RTT from %d to %d (best: %d)", serversInfo.inner[candidate].Name, int(candidateRtt), int(serversInfo.inner[candidate].rtt.Value()), int(currentBestRtt))
serversInfo.inner[candidate].rtt.Add(candidateRtt / 2.0)
dlog.Debugf(
"Giving a new chance to candidate [%s], lowering its RTT from %d to %d (best: %d)",
serversInfo.inner[candidate].Name,
int(candidateRtt),
int(serversInfo.inner[candidate].rtt.Value()),
int(serversInfo.inner[0].rtt.Value()),
)
partialSort = true
}
}
if partialSort {
serversCount := len(serversInfo.inner)
for i := 1; i < serversCount; i++ {
if serversInfo.inner[i-1].rtt.Value() > serversInfo.inner[i].rtt.Value() {
serversInfo.inner[i-1], serversInfo.inner[i] = serversInfo.inner[i], serversInfo.inner[i-1]
@ -265,12 +324,12 @@ func (serversInfo *ServersInfo) getOne() *ServerInfo {
serversInfo.Unlock()
return nil
}
if serversInfo.lbEstimator {
serversInfo.estimatorUpdate()
}
candidate := serversInfo.lbStrategy.getCandidate(serversCount)
if serversInfo.lbEstimator {
serversInfo.estimatorUpdate(candidate)
}
serverInfo := serversInfo.inner[candidate]
dlog.Debugf("Using candidate [%s] RTT: %d", (*serverInfo).Name, int((*serverInfo).rtt.Value()))
dlog.Debugf("Using candidate [%s] RTT: %d", serverInfo.Name, int(serverInfo.rtt.Value()))
serversInfo.Unlock()
return serverInfo
@ -297,12 +356,13 @@ func findFarthestRoute(proxy *Proxy, name string, relayStamps []stamps.ServerSta
}
}
if serverIdx < 0 {
proxy.serversInfo.RUnlock()
return nil
}
server := proxy.serversInfo.registeredServers[serverIdx]
proxy.serversInfo.RUnlock()
// Fall back to random relays until the logic is implementeed for non-DNSCrypt relays
// Fall back to random relays until the logic is implemented for non-DNSCrypt relays
if server.stamp.Proto == stamps.StampProtoTypeODoHTarget {
candidates := make([]int, 0)
for relayIdx, relayStamp := range relayStamps {
@ -392,16 +452,19 @@ func route(proxy *Proxy, name string, serverProto stamps.StampProtoType) (*Relay
return nil, nil
}
relayStamps := make([]stamps.ServerStamp, 0)
relayStampToName := make(map[string]string)
for _, relayName := range relayNames {
if relayStamp, err := stamps.NewServerStampFromString(relayName); err == nil {
if relayStamp.Proto == relayProto {
relayStamps = append(relayStamps, relayStamp)
relayStampToName[relayStamp.String()] = relayName
}
} else if relayName == "*" {
proxy.serversInfo.RLock()
for _, registeredServer := range proxy.serversInfo.registeredRelays {
if registeredServer.stamp.Proto == relayProto {
relayStamps = append(relayStamps, registeredServer.stamp)
relayStampToName[registeredServer.stamp.String()] = registeredServer.name
}
}
proxy.serversInfo.RUnlock()
@ -412,12 +475,7 @@ func route(proxy *Proxy, name string, serverProto stamps.StampProtoType) (*Relay
for _, registeredServer := range proxy.serversInfo.registeredRelays {
if registeredServer.name == relayName && registeredServer.stamp.Proto == relayProto {
relayStamps = append(relayStamps, registeredServer.stamp)
break
}
}
for _, registeredServer := range proxy.serversInfo.registeredServers {
if registeredServer.name == relayName && registeredServer.stamp.Proto == relayProto {
relayStamps = append(relayStamps, registeredServer.stamp)
relayStampToName[registeredServer.stamp.String()] = relayName
break
}
}
@ -425,8 +483,8 @@ func route(proxy *Proxy, name string, serverProto stamps.StampProtoType) (*Relay
}
}
if len(relayStamps) == 0 {
dlog.Warnf("Empty relay set for [%v]", name)
return nil, nil
err := fmt.Errorf("Non-existent relay set for server [%v]", name)
return nil, err
}
var relayCandidateStamp *stamps.ServerStamp
if !wildcard || len(relayStamps) == 1 {
@ -437,15 +495,7 @@ func route(proxy *Proxy, name string, serverProto stamps.StampProtoType) (*Relay
if relayCandidateStamp == nil {
return nil, fmt.Errorf("No valid relay for server [%v]", name)
}
relayName := relayCandidateStamp.ServerAddrStr
proxy.serversInfo.RLock()
for _, registeredServer := range proxy.serversInfo.registeredRelays {
if registeredServer.stamp.ServerAddrStr == relayCandidateStamp.ServerAddrStr {
relayName = registeredServer.name
break
}
}
proxy.serversInfo.RUnlock()
relayName := relayStampToName[relayCandidateStamp.String()]
switch relayCandidateStamp.Proto {
case stamps.StampProtoTypeDNSCrypt, stamps.StampProtoTypeDNSCryptRelay:
relayUDPAddr, err := net.ResolveUDPAddr("udp", relayCandidateStamp.ServerAddrStr)
@ -457,9 +507,14 @@ func route(proxy *Proxy, name string, serverProto stamps.StampProtoType) (*Relay
return nil, err
}
dlog.Noticef("Anonymizing queries for [%v] via [%v]", name, relayName)
return &Relay{Proto: stamps.StampProtoTypeDNSCryptRelay, Dnscrypt: &DNSCryptRelay{RelayUDPAddr: relayUDPAddr, RelayTCPAddr: relayTCPAddr}}, nil
return &Relay{
Proto: stamps.StampProtoTypeDNSCryptRelay,
Dnscrypt: &DNSCryptRelay{RelayUDPAddr: relayUDPAddr, RelayTCPAddr: relayTCPAddr},
}, nil
case stamps.StampProtoTypeODoHRelay:
relayBaseURL, err := url.Parse("https://" + url.PathEscape(relayCandidateStamp.ProviderName) + relayCandidateStamp.Path)
relayBaseURL, err := url.Parse(
"https://" + url.PathEscape(relayCandidateStamp.ProviderName) + relayCandidateStamp.Path,
)
if err != nil {
return nil, err
}
@ -496,7 +551,7 @@ func route(proxy *Proxy, name string, serverProto stamps.StampProtoType) (*Relay
func fetchDNSCryptServerInfo(proxy *Proxy, name string, stamp stamps.ServerStamp, isNew bool) (ServerInfo, error) {
if len(stamp.ServerPk) != ed25519.PublicKeySize {
serverPk, err := hex.DecodeString(strings.Replace(string(stamp.ServerPk), ":", "", -1))
serverPk, err := hex.DecodeString(strings.ReplaceAll(string(stamp.ServerPk), ":", ""))
if err != nil || len(serverPk) != ed25519.PublicKeySize {
dlog.Fatalf("Unsupported public key for [%s]: [%s]", name, stamp.ServerPk)
}
@ -519,7 +574,17 @@ func fetchDNSCryptServerInfo(proxy *Proxy, name string, stamp stamps.ServerStamp
if relay != nil {
dnscryptRelay = relay.Dnscrypt
}
certInfo, rtt, fragmentsBlocked, err := FetchCurrentDNSCryptCert(proxy, &name, proxy.mainProto, stamp.ServerPk, stamp.ServerAddrStr, stamp.ProviderName, isNew, dnscryptRelay, knownBugs)
certInfo, rtt, fragmentsBlocked, err := FetchCurrentDNSCryptCert(
proxy,
&name,
proxy.mainProto,
stamp.ServerPk,
stamp.ServerAddrStr,
stamp.ProviderName,
isNew,
dnscryptRelay,
knownBugs,
)
if !knownBugs.fragmentsBlocked && fragmentsBlocked {
dlog.Debugf("[%v] drops fragmented queries", name)
knownBugs.fragmentsBlocked = true
@ -567,7 +632,7 @@ func dohTestPacket(msgID uint16) []byte {
msg.SetEdns0(uint16(MaxDNSPacketSize), false)
ext := new(dns.EDNS0_PADDING)
ext.Padding = make([]byte, 16)
crypto_rand.Read(ext.Padding)
_, _ = crypto_rand.Read(ext.Padding)
edns0 := msg.IsEdns0()
edns0.Option = append(edns0.Option, ext)
body, err := msg.Pack()
@ -590,7 +655,7 @@ func dohNXTestPacket(msgID uint16) []byte {
msg.SetEdns0(uint16(MaxDNSPacketSize), false)
ext := new(dns.EDNS0_PADDING)
ext.Padding = make([]byte, 16)
crypto_rand.Read(ext.Padding)
_, _ = crypto_rand.Read(ext.Padding)
edns0 := msg.IsEdns0()
edns0.Option = append(edns0.Option, ext)
body, err := msg.Pack()
@ -648,7 +713,7 @@ func fetchDoHServerInfo(proxy *Proxy, name string, stamp stamps.ServerStamp, isN
protocol = "http/1.x"
}
if strings.HasPrefix(protocol, "http/1.") {
dlog.Warnf("[%s] does not support HTTP/2", name)
dlog.Warnf("[%s] does not support HTTP/2 nor HTTP/3", name)
}
dlog.Infof("[%s] TLS version: %x - Protocol: %v - Cipher suite: %v", name, tls.Version, protocol, tls.CipherSuite)
showCerts := proxy.showCerts
@ -728,7 +793,10 @@ func _fetchODoHTargetInfo(proxy *Proxy, name string, stamp stamps.ServerStamp, i
}
if relay == nil {
dlog.Criticalf("No relay defined for [%v] - Configuring a relay is required for ODoH servers (see the `[anonymized_dns]` section)", name)
dlog.Criticalf(
"No relay defined for [%v] - Configuring a relay is required for ODoH servers (see the `[anonymized_dns]` section)",
name,
)
return ServerInfo{}, errors.New("No ODoH relay")
} else {
if relay.ODoH == nil {
@ -776,7 +844,12 @@ func _fetchODoHTargetInfo(proxy *Proxy, name string, stamp stamps.ServerStamp, i
continue
}
responseBody, responseCode, tls, rtt, err := proxy.xTransport.ObliviousDoHQuery(useGet, url, odohQuery.odohMessage, proxy.timeout)
responseBody, responseCode, tls, rtt, err := proxy.xTransport.ObliviousDoHQuery(
useGet,
url,
odohQuery.odohMessage,
proxy.timeout,
)
if err != nil {
continue
}
@ -798,42 +871,57 @@ func _fetchODoHTargetInfo(proxy *Proxy, name string, stamp stamps.ServerStamp, i
if msg.Rcode != dns.RcodeNameError {
dlog.Criticalf("[%s] may be a lying resolver", name)
}
protocol := tls.NegotiatedProtocol
if len(protocol) == 0 {
protocol = "http/1.x"
protocol := "http"
tlsVersion := uint16(0)
tlsCipherSuite := uint16(0)
if tls != nil {
protocol = tls.NegotiatedProtocol
if len(protocol) == 0 {
protocol = "http/1.x"
} else {
tlsVersion = tls.Version
tlsCipherSuite = tls.CipherSuite
}
}
if strings.HasPrefix(protocol, "http/1.") {
dlog.Warnf("[%s] does not support HTTP/2", name)
}
dlog.Infof("[%s] TLS version: %x - Protocol: %v - Cipher suite: %v", name, tls.Version, protocol, tls.CipherSuite)
dlog.Infof(
"[%s] TLS version: %x - Protocol: %v - Cipher suite: %v",
name,
tlsVersion,
protocol,
tlsCipherSuite,
)
showCerts := proxy.showCerts
found := false
var wantedHash [32]byte
for _, cert := range tls.PeerCertificates {
h := sha256.Sum256(cert.RawTBSCertificate)
if showCerts {
dlog.Noticef("Advertised relay cert: [%s] [%x]", cert.Subject, h)
} else {
dlog.Debugf("Advertised relay cert: [%s] [%x]", cert.Subject, h)
}
for _, hash := range stamp.Hashes {
if len(hash) == len(wantedHash) {
copy(wantedHash[:], hash)
if h == wantedHash {
found = true
break
if tls != nil {
for _, cert := range tls.PeerCertificates {
h := sha256.Sum256(cert.RawTBSCertificate)
if showCerts {
dlog.Noticef("Advertised relay cert: [%s] [%x]", cert.Subject, h)
} else {
dlog.Debugf("Advertised relay cert: [%s] [%x]", cert.Subject, h)
}
for _, hash := range stamp.Hashes {
if len(hash) == len(wantedHash) {
copy(wantedHash[:], hash)
if h == wantedHash {
found = true
break
}
}
}
if found {
break
}
}
if found {
break
if !found && len(stamp.Hashes) > 0 {
dlog.Criticalf("[%s] Certificate hash [%x] not found", name, wantedHash)
return ServerInfo{}, fmt.Errorf("Certificate hash not found")
}
}
if !found && len(stamp.Hashes) > 0 {
dlog.Criticalf("[%s] Certificate hash [%x] not found", name, wantedHash)
return ServerInfo{}, fmt.Errorf("Certificate hash not found")
}
if len(serverResponse) < MinDNSPacketSize || len(serverResponse) > MaxDNSPacketSize ||
serverResponse[0] != 0xca || serverResponse[1] != 0xfe || serverResponse[4] != 0x00 || serverResponse[5] != 0x01 {
dlog.Info("Webserver returned an unexpected response")

View File

@ -8,13 +8,15 @@ import (
clocksmith "github.com/jedisct1/go-clocksmith"
)
const SdNotifyStatus = "STATUS="
func ServiceManagerStartNotify() error {
daemon.SdNotify(false, "STATUS=Starting")
daemon.SdNotify(false, SdNotifyStatus+"Starting...")
return nil
}
func ServiceManagerReadyNotify() error {
daemon.SdNotify(false, "READY=1")
daemon.SdNotify(false, daemon.SdNotifyReady+"\n"+SdNotifyStatus+"Ready")
return systemDWatchdog()
}
@ -26,10 +28,9 @@ func systemDWatchdog() error {
refreshInterval := watchdogFailureDelay / 3
go func() {
for {
daemon.SdNotify(false, "WATCHDOG=1")
daemon.SdNotify(false, daemon.SdNotifyWatchdog)
clocksmith.Sleep(refreshInterval)
}
}()
return nil
}
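The change above replaces hard-coded sd_notify strings with the daemon.SdNotify* constants from github.com/coreos/go-systemd/daemon. A hedged sketch of the same ready/watchdog notifications in isolation; the interval derivation (a third of the watchdog failure delay) mirrors the code above, everything else is illustrative:

```go
package main

import (
	"time"

	"github.com/coreos/go-systemd/daemon"
)

func notifyReadyWithWatchdog() {
	// Tell systemd the service is ready (a no-op outside a systemd unit).
	_, _ = daemon.SdNotify(false, daemon.SdNotifyReady+"\nSTATUS=Ready")

	// If WatchdogSec= is configured, ping at a third of the failure delay.
	if delay, err := daemon.SdWatchdogEnabled(false); err == nil && delay > 0 {
		go func() {
			for {
				_, _ = daemon.SdNotify(false, daemon.SdNotifyWatchdog)
				time.Sleep(delay / 3)
			}
		}()
	}
}

func main() {
	notifyReadyWithWatchdog()
	time.Sleep(time.Second) // stand-in for the service main loop
}
```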

View File

@ -12,7 +12,12 @@ func (proxy *Proxy) udpListenerConfig() (*net.ListenConfig, error) {
_ = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP, syscall.IP_FREEBIND, 1)
_ = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP, syscall.IP_DF, 0)
_ = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP, syscall.IP_TOS, 0x70)
_ = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP, syscall.IP_MTU_DISCOVER, syscall.IP_PMTUDISC_DONT)
_ = syscall.SetsockoptInt(
int(fd),
syscall.IPPROTO_IP,
syscall.IP_MTU_DISCOVER,
syscall.IP_PMTUDISC_DONT,
)
_ = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_RCVBUFFORCE, 4096)
_ = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_SNDBUFFORCE, 4096)
})
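These setsockopt calls run inside a net.ListenConfig Control callback, the only hook Go offers for setting options before the UDP socket is bound. A minimal Linux-only sketch of that wiring, showing just a subset of the options for illustration:

```go
//go:build linux

package main

import (
	"context"
	"net"
	"syscall"
)

func udpListenerConfig() *net.ListenConfig {
	return &net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			return c.Control(func(fd uintptr) {
				// Best effort: errors are ignored, as in the code above.
				_ = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP, syscall.IP_FREEBIND, 1)
				_ = syscall.SetsockoptInt(int(fd), syscall.IPPROTO_IP,
					syscall.IP_MTU_DISCOVER, syscall.IP_PMTUDISC_DONT)
			})
		},
	}
}

func main() {
	lc := udpListenerConfig()
	conn, err := lc.ListenPacket(context.Background(), "udp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}
```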

View File

@ -3,7 +3,6 @@ package main
import (
"bytes"
"fmt"
"io/ioutil"
"math/rand"
"net/url"
"os"
@ -33,7 +32,7 @@ type Source struct {
name string
urls []*url.URL
format SourceFormat
in []byte
bin []byte
minisignKey *minisign.PublicKey
cacheFile string
cacheTTL, prefetchDelay time.Duration
@ -41,83 +40,84 @@ type Source struct {
prefix string
}
func (source *Source) checkSignature(bin, sig []byte) (err error) {
var signature minisign.Signature
if signature, err = minisign.DecodeSignature(string(sig)); err == nil {
// timeNow() is replaced by tests to provide a static value
var timeNow = time.Now
func (source *Source) checkSignature(bin, sig []byte) error {
signature, err := minisign.DecodeSignature(string(sig))
if err == nil {
_, err = source.minisignKey.Verify(bin, signature)
}
return err
}
// timeNow can be replaced by tests to provide a static value
var timeNow = time.Now
func (source *Source) fetchFromCache(now time.Time) (delay time.Duration, err error) {
func (source *Source) fetchFromCache(now time.Time) (time.Duration, error) {
var err error
var bin, sig []byte
if bin, err = ioutil.ReadFile(source.cacheFile); err != nil {
return
if bin, err = os.ReadFile(source.cacheFile); err != nil {
return 0, err
}
if sig, err = ioutil.ReadFile(source.cacheFile + ".minisig"); err != nil {
return
if sig, err = os.ReadFile(source.cacheFile + ".minisig"); err != nil {
return 0, err
}
if err = source.checkSignature(bin, sig); err != nil {
return
return 0, err
}
source.in = bin
source.bin = bin
var fi os.FileInfo
if fi, err = os.Stat(source.cacheFile); err != nil {
return
return 0, err
}
var ttl time.Duration = 0
if elapsed := now.Sub(fi.ModTime()); elapsed < source.cacheTTL {
delay = source.prefetchDelay - elapsed
dlog.Debugf("Source [%s] cache file [%s] is still fresh, next update: %v", source.name, source.cacheFile, delay)
ttl = source.prefetchDelay - elapsed
dlog.Debugf("Source [%s] cache file [%s] is still fresh, next update: %v", source.name, source.cacheFile, ttl)
} else {
dlog.Debugf("Source [%s] cache file [%s] needs to be refreshed", source.name, source.cacheFile)
}
return
return ttl, nil
}
func writeSource(f string, bin, sig []byte) (err error) {
func writeSource(f string, bin, sig []byte) error {
var err error
var fSrc, fSig *safefile.File
if fSrc, err = safefile.Create(f, 0644); err != nil {
return
if fSrc, err = safefile.Create(f, 0o644); err != nil {
return err
}
defer fSrc.Close()
if fSig, err = safefile.Create(f+".minisig", 0644); err != nil {
return
if fSig, err = safefile.Create(f+".minisig", 0o644); err != nil {
return err
}
defer fSig.Close()
if _, err = fSrc.Write(bin); err != nil {
return
return err
}
if _, err = fSig.Write(sig); err != nil {
return
return err
}
if err = fSrc.Commit(); err != nil {
return
return err
}
return fSig.Commit()
}
func (source *Source) writeToCache(bin, sig []byte, now time.Time) {
f := source.cacheFile
var writeErr error // an error writing cache isn't fatal
defer func() {
source.in = bin
if writeErr == nil {
return
}
if absPath, absErr := filepath.Abs(f); absErr == nil {
f = absPath
}
dlog.Warnf("%s: %s", f, writeErr)
}()
if !bytes.Equal(source.in, bin) {
if writeErr = writeSource(f, bin, sig); writeErr != nil {
return
func (source *Source) updateCache(bin, sig []byte, now time.Time) {
file := source.cacheFile
absPath := file
if resolved, err := filepath.Abs(file); err == nil {
absPath = resolved
}
if !bytes.Equal(source.bin, bin) {
if err := writeSource(file, bin, sig); err != nil {
dlog.Warnf("Couldn't write cache file [%s]: %s", absPath, err) // an error writing to the cache isn't fatal
}
}
writeErr = os.Chtimes(f, now, now)
if err := os.Chtimes(file, now, now); err != nil {
dlog.Warnf("Couldn't update cache file [%s]: %s", absPath, err)
}
source.bin = bin
}
func (source *Source) parseURLs(urls []string) {
@ -130,28 +130,32 @@ func (source *Source) parseURLs(urls []string) {
}
}
func fetchFromURL(xTransport *XTransport, u *url.URL) (bin []byte, err error) {
bin, _, _, _, err = xTransport.Get(u, "", DefaultTimeout)
func fetchFromURL(xTransport *XTransport, u *url.URL) ([]byte, error) {
bin, _, _, _, err := xTransport.GetWithCompression(u, "", DefaultTimeout)
return bin, err
}
func (source *Source) fetchWithCache(xTransport *XTransport, now time.Time) (delay time.Duration, err error) {
if delay, err = source.fetchFromCache(now); err != nil {
func (source *Source) fetchWithCache(xTransport *XTransport, now time.Time) (time.Duration, error) {
var err error
var ttl time.Duration
if ttl, err = source.fetchFromCache(now); err != nil {
if len(source.urls) == 0 {
dlog.Errorf("Source [%s] cache file [%s] not present and no valid URL", source.name, source.cacheFile)
return
return 0, err
}
dlog.Debugf("Source [%s] cache file [%s] not present", source.name, source.cacheFile)
}
if len(source.urls) > 0 {
defer func() {
source.refresh = now.Add(delay)
}()
if len(source.urls) == 0 {
return 0, err
}
if len(source.urls) == 0 || delay > 0 {
return
if ttl > 0 {
source.refresh = now.Add(ttl)
return 0, err
}
delay = MinimumPrefetchInterval
ttl = MinimumPrefetchInterval
source.refresh = now.Add(ttl)
var bin, sig []byte
for _, srcURL := range source.urls {
dlog.Infof("Source [%s] loading from URL [%s]", source.name, srcURL)
@ -166,25 +170,43 @@ func (source *Source) fetchWithCache(xTransport *XTransport, now time.Time) (del
dlog.Debugf("Source [%s] failed to download signature from URL [%s]", source.name, sigURL)
continue
}
if err = source.checkSignature(bin, sig); err == nil {
break // valid signature
} // above err check inverted to make use of implicit continue
dlog.Debugf("Source [%s] failed signature check using URL [%s]", source.name, srcURL)
if err = source.checkSignature(bin, sig); err != nil {
dlog.Debugf("Source [%s] failed signature check using URL [%s]", source.name, srcURL)
continue
}
break // valid signature
}
if err != nil {
return
return 0, err
}
source.writeToCache(bin, sig, now)
delay = source.prefetchDelay
return
source.updateCache(bin, sig, now)
ttl = source.prefetchDelay
source.refresh = now.Add(ttl)
return ttl, nil
}
// NewSource loads a new source using the given cacheFile and urls, ensuring it has a valid signature
func NewSource(name string, xTransport *XTransport, urls []string, minisignKeyStr string, cacheFile string, formatStr string, refreshDelay time.Duration, prefix string) (source *Source, err error) {
func NewSource(
name string,
xTransport *XTransport,
urls []string,
minisignKeyStr string,
cacheFile string,
formatStr string,
refreshDelay time.Duration,
prefix string,
) (*Source, error) {
if refreshDelay < DefaultPrefetchDelay {
refreshDelay = DefaultPrefetchDelay
}
source = &Source{name: name, urls: []*url.URL{}, cacheFile: cacheFile, cacheTTL: refreshDelay, prefetchDelay: DefaultPrefetchDelay, prefix: prefix}
source := &Source{
name: name,
urls: []*url.URL{},
cacheFile: cacheFile,
cacheTTL: refreshDelay,
prefetchDelay: DefaultPrefetchDelay,
prefix: prefix,
}
if formatStr == "v2" {
source.format = SourceFormatV2
} else {
@ -196,10 +218,11 @@ func NewSource(name string, xTransport *XTransport, urls []string, minisignKeySt
return source, err
}
source.parseURLs(urls)
if _, err = source.fetchWithCache(xTransport, timeNow()); err == nil {
_, err := source.fetchWithCache(xTransport, timeNow())
if err == nil {
dlog.Noticef("Source [%s] loaded", name)
}
return
return source, err
}
// PrefetchSources downloads latest versions of given sources, ensuring they have a valid signature before caching
@ -214,7 +237,7 @@ func PrefetchSources(xTransport *XTransport, sources []*Source) time.Duration {
if delay, err := source.fetchWithCache(xTransport, now); err != nil {
dlog.Infof("Prefetching [%s] failed: %v, will retry in %v", source.name, err, interval)
} else {
dlog.Debugf("Prefetching [%s] succeeded, next update: %v", source.name, delay)
dlog.Debugf("Prefetching [%s] succeeded, next update in %v min", source.name, delay)
if delay >= MinimumPrefetchInterval && (interval == MinimumPrefetchInterval || interval > delay) {
interval = delay
}
@ -239,7 +262,7 @@ func (source *Source) parseV2() ([]RegisteredServer, error) {
stampErrs = append(stampErrs, stampErr)
dlog.Warn(stampErr)
}
in := string(source.in)
in := string(source.bin)
parts := strings.Split(in, "## ")
if len(parts) < 2 {
return registeredServers, fmt.Errorf("Invalid format for source at [%v]", source.urls)
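Both the cache path and the download path above funnel through checkSignature, which wraps github.com/jedisct1/go-minisign. A sketch of that verification flow in isolation; the key string and file name are placeholders, not values from the diff:

```go
package main

import (
	"fmt"
	"os"

	"github.com/jedisct1/go-minisign"
)

// verifyFile checks a file against its detached .minisig signature using a
// minisign public key given in its usual base64 text form.
func verifyFile(publicKeyStr, file string) error {
	key, err := minisign.NewPublicKey(publicKeyStr)
	if err != nil {
		return err
	}
	bin, err := os.ReadFile(file)
	if err != nil {
		return err
	}
	sig, err := os.ReadFile(file + ".minisig")
	if err != nil {
		return err
	}
	signature, err := minisign.DecodeSignature(string(sig))
	if err != nil {
		return err
	}
	_, err = key.Verify(bin, signature)
	return err
}

func main() {
	// Placeholder values: substitute a real minisign public key and file.
	err := verifyFile("RWQxxxxxxxxxxxxxxxxxxxxx", "public-resolvers.md")
	fmt.Println("verification result:", err)
}
```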

View File

@ -3,7 +3,6 @@ package main
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
@ -15,9 +14,9 @@ import (
"time"
"github.com/hectane/go-acl"
"github.com/powerman/check"
"github.com/jedisct1/dlog"
"github.com/jedisct1/go-minisign"
"github.com/powerman/check"
)
type SourceFixture struct {
@ -69,7 +68,7 @@ type SourceTestExpect struct {
}
func readFixture(t *testing.T, name string) []byte {
bin, err := ioutil.ReadFile(filepath.Join("testdata", name))
bin, err := os.ReadFile(filepath.Join("testdata", name))
if err != nil {
t.Fatalf("Unable to read test fixture %s: %v", name, err)
}
@ -84,9 +83,9 @@ func writeSourceCache(t *testing.T, e *SourceTestExpect) {
path := e.cachePath + f.suffix
perms := f.perms
if perms == 0 {
perms = 0644
perms = 0o644
}
if err := ioutil.WriteFile(path, f.content, perms); err != nil {
if err := os.WriteFile(path, f.content, perms); err != nil {
t.Fatalf("Unable to write cache file %s: %v", path, err)
}
if err := acl.Chmod(path, perms); err != nil {
@ -108,8 +107,8 @@ func writeSourceCache(t *testing.T, e *SourceTestExpect) {
func checkSourceCache(c *check.C, e *SourceTestExpect) {
for _, f := range e.cache {
path := e.cachePath + f.suffix
_ = acl.Chmod(path, 0644) // don't worry if this fails, reading it will catch the same problem
got, err := ioutil.ReadFile(path)
_ = acl.Chmod(path, 0o644) // don't worry if this fails, reading it will catch the same problem
got, err := os.ReadFile(path)
c.DeepEqual(got, f.content, "Unexpected content for cache file '%s', err %v", path, err)
if f.suffix != "" {
continue
@ -134,7 +133,7 @@ func loadSnakeoil(t *testing.T, d *SourceTestData) {
}
func loadTestSourceNames(t *testing.T, d *SourceTestData) {
files, err := ioutil.ReadDir(filepath.Join("testdata", "sources"))
files, err := os.ReadDir(filepath.Join("testdata", "sources"))
if err != nil {
t.Fatalf("Unable to load list of test sources: %v", err)
}
@ -145,7 +144,7 @@ func loadTestSourceNames(t *testing.T, d *SourceTestData) {
}
}
func generateFixtureState(t *testing.T, d *SourceTestData, suffix, file string, state SourceTestState) {
func generateFixtureState(_ *testing.T, d *SourceTestData, suffix, file string, state SourceTestState) {
if _, ok := d.fixtures[state]; !ok {
d.fixtures[state] = map[string]SourceFixture{}
}
@ -165,7 +164,7 @@ func generateFixtureState(t *testing.T, d *SourceTestData, suffix, file string,
case TestStateReadErr, TestStateReadSigErr:
f.content, f.length = []byte{}, "1"
case TestStateOpenErr, TestStateOpenSigErr:
f.content, f.perms = d.fixtures[TestStateCorrect][file].content[:1], 0200
f.content, f.perms = d.fixtures[TestStateCorrect][file].content[:1], 0o200
}
d.fixtures[state][file] = f
}
@ -196,7 +195,7 @@ func loadFixtures(t *testing.T, d *SourceTestData) {
}
func makeTempDir(t *testing.T, d *SourceTestData) {
name, err := ioutil.TempDir("", "sources_test.go."+t.Name())
name, err := os.MkdirTemp("", "sources_test.go."+t.Name())
if err != nil {
t.Fatalf("Unable to create temporary directory: %v", err)
}
@ -285,9 +284,9 @@ func prepSourceTestCache(t *testing.T, d *SourceTestData, e *SourceTestExpect, s
e.cache = []SourceFixture{d.fixtures[state][source], d.fixtures[state][source+".minisig"]}
switch state {
case TestStateCorrect:
e.Source.in, e.success = e.cache[0].content, true
e.Source.bin, e.success = e.cache[0].content, true
case TestStateExpired:
e.Source.in = e.cache[0].content
e.Source.bin = e.cache[0].content
case TestStatePartial, TestStatePartialSig:
e.err = "signature"
case TestStateMissing, TestStateMissingSig, TestStateOpenErr, TestStateOpenSigErr:
@ -296,7 +295,13 @@ func prepSourceTestCache(t *testing.T, d *SourceTestData, e *SourceTestExpect, s
writeSourceCache(t, e)
}
func prepSourceTestDownload(t *testing.T, d *SourceTestData, e *SourceTestExpect, source string, downloadTest []SourceTestState) {
func prepSourceTestDownload(
_ *testing.T,
d *SourceTestData,
e *SourceTestExpect,
source string,
downloadTest []SourceTestState,
) {
if len(downloadTest) == 0 {
return
}
@ -313,7 +318,11 @@ func prepSourceTestDownload(t *testing.T, d *SourceTestData, e *SourceTestExpect
case TestStateOpenErr, TestStateOpenSigErr:
if u, err := url.Parse(serverURL + path); err == nil {
host, port := ExtractHostAndPort(u.Host, -1)
u.Host = fmt.Sprintf("%s:%d", host, port|0x10000) // high numeric port is parsed but then fails to connect
u.Host = fmt.Sprintf(
"%s:%d",
host,
port|0x10000,
) // high numeric port is parsed but then fails to connect
serverURL = u.String()
}
e.err = "invalid port"
@ -330,7 +339,7 @@ func prepSourceTestDownload(t *testing.T, d *SourceTestData, e *SourceTestExpect
switch state {
case TestStateCorrect:
e.cache = []SourceFixture{d.fixtures[state][source], d.fixtures[state][source+".minisig"]}
e.Source.in, e.success = e.cache[0].content, true
e.Source.bin, e.success = e.cache[0].content, true
fallthrough
case TestStateMissingSig, TestStatePartial, TestStatePartialSig, TestStateReadSigErr:
d.reqExpect[path+".minisig"]++
@ -353,14 +362,17 @@ func prepSourceTestDownload(t *testing.T, d *SourceTestData, e *SourceTestExpect
}
func setupSourceTestCase(t *testing.T, d *SourceTestData, i int,
cacheTest *SourceTestState, downloadTest []SourceTestState) (id string, e *SourceTestExpect) {
cacheTest *SourceTestState, downloadTest []SourceTestState,
) (id string, e *SourceTestExpect) {
id = strconv.Itoa(d.n) + "-" + strconv.Itoa(i)
e = &SourceTestExpect{
cachePath: filepath.Join(d.tempDir, id),
mtime: d.timeNow,
}
e.Source = &Source{name: id, urls: []*url.URL{}, format: SourceFormatV2, minisignKey: d.key,
cacheFile: e.cachePath, cacheTTL: DefaultPrefetchDelay * 3, prefetchDelay: DefaultPrefetchDelay}
e.Source = &Source{
name: id, urls: []*url.URL{}, format: SourceFormatV2, minisignKey: d.key,
cacheFile: e.cachePath, cacheTTL: DefaultPrefetchDelay * 3, prefetchDelay: DefaultPrefetchDelay,
}
if cacheTest != nil {
prepSourceTestCache(t, d, e, d.sources[i], *cacheTest)
i = (i + 1) % len(d.sources) // make the cached and downloaded fixtures different
@ -370,6 +382,10 @@ func setupSourceTestCase(t *testing.T, d *SourceTestData, i int,
}
func TestNewSource(t *testing.T) {
if testing.Verbose() {
dlog.SetLogLevel(dlog.SeverityDebug)
dlog.UseSyslog(false)
}
teardown, d := setupSourceTest(t)
defer teardown()
checkResult := func(t *testing.T, e *SourceTestExpect, got *Source, err error) {
@ -394,7 +410,16 @@ func TestNewSource(t *testing.T) {
{"v2", "", DefaultPrefetchDelay * 3, &SourceTestExpect{err: "Invalid encoded public key", Source: &Source{name: "invalid public key", urls: []*url.URL{}, cacheTTL: DefaultPrefetchDelay * 3, prefetchDelay: DefaultPrefetchDelay}}},
} {
t.Run(tt.e.Source.name, func(t *testing.T) {
got, err := NewSource(tt.e.Source.name, d.xTransport, tt.e.urls, tt.key, tt.e.cachePath, tt.v, tt.refreshDelay, tt.e.prefix)
got, err := NewSource(
tt.e.Source.name,
d.xTransport,
tt.e.urls,
tt.key,
tt.e.cachePath,
tt.v,
tt.refreshDelay,
tt.e.prefix,
)
checkResult(t, tt.e, got, err)
})
}
@ -404,7 +429,16 @@ func TestNewSource(t *testing.T) {
for i := range d.sources {
id, e := setupSourceTestCase(t, d, i, &cacheTest, downloadTest)
t.Run("cache "+cacheTestName+", download "+downloadTestName+"/"+id, func(t *testing.T) {
got, err := NewSource(id, d.xTransport, e.urls, d.keyStr, e.cachePath, "v2", DefaultPrefetchDelay*3, "")
got, err := NewSource(
id,
d.xTransport,
e.urls,
d.keyStr,
e.cachePath,
"v2",
DefaultPrefetchDelay*3,
"",
)
checkResult(t, e, got, err)
})
}
@ -413,6 +447,10 @@ func TestNewSource(t *testing.T) {
}
func TestPrefetchSources(t *testing.T) {
if testing.Verbose() {
dlog.SetLogLevel(dlog.SeverityDebug)
dlog.UseSyslog(false)
}
teardown, d := setupSourceTest(t)
defer teardown()
checkResult := func(t *testing.T, expects []*SourceTestExpect, got time.Duration) {
@ -439,7 +477,7 @@ func TestPrefetchSources(t *testing.T) {
e.mtime = d.timeUpd
s := &Source{}
*s = *e.Source
s.in = nil
s.bin = nil
sources = append(sources, s)
expects = append(expects, e)
}

View File

@ -15,7 +15,9 @@ func (proxy *Proxy) addSystemDListeners() error {
if len(files) > 0 {
if len(proxy.userName) > 0 || proxy.child {
dlog.Fatal("Systemd activated sockets are incompatible with privilege dropping. Remove activated sockets and fill `listen_addresses` in the dnscrypt-proxy configuration file instead.")
dlog.Fatal(
"Systemd activated sockets are incompatible with privilege dropping. Remove activated sockets and fill `listen_addresses` in the dnscrypt-proxy configuration file instead.",
)
}
dlog.Warn("Systemd sockets are untested and unsupported - use at your own risk")
}

View File

@ -62,7 +62,15 @@ func parseTimeRanges(timeRangesStr []TimeRangeStr) ([]TimeRange, error) {
func parseWeeklyRanges(weeklyRangesStr WeeklyRangesStr) (WeeklyRanges, error) {
weeklyRanges := WeeklyRanges{}
weeklyRangesStrX := [7][]TimeRangeStr{weeklyRangesStr.Sun, weeklyRangesStr.Mon, weeklyRangesStr.Tue, weeklyRangesStr.Wed, weeklyRangesStr.Thu, weeklyRangesStr.Fri, weeklyRangesStr.Sat}
weeklyRangesStrX := [7][]TimeRangeStr{
weeklyRangesStr.Sun,
weeklyRangesStr.Mon,
weeklyRangesStr.Tue,
weeklyRangesStr.Wed,
weeklyRangesStr.Thu,
weeklyRangesStr.Fri,
weeklyRangesStr.Sat,
}
for day, weeklyRangeStrX := range weeklyRangesStrX {
timeRanges, err := parseTimeRanges(weeklyRangeStrX)
if err != nil {

View File

@ -2,6 +2,7 @@ package main
import (
"bytes"
"compress/gzip"
"context"
"crypto/sha512"
"crypto/tls"
@ -10,11 +11,11 @@ import (
"encoding/hex"
"errors"
"io"
"io/ioutil"
"math/rand"
"net"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"sync"
@ -23,6 +24,8 @@ import (
"github.com/jedisct1/dlog"
stamps "github.com/jedisct1/go-dnsstamps"
"github.com/miekg/dns"
"github.com/quic-go/quic-go"
"github.com/quic-go/quic-go/http3"
"golang.org/x/net/http2"
netproxy "golang.org/x/net/proxy"
)
@ -46,21 +49,32 @@ type CachedIPs struct {
cache map[string]*CachedIPItem
}
type AltSupport struct {
sync.RWMutex
cache map[string]uint16
}
type XTransport struct {
transport *http.Transport
h3Transport *http3.RoundTripper
keepAlive time.Duration
timeout time.Duration
cachedIPs CachedIPs
altSupport AltSupport
internalResolvers []string
bootstrapResolvers []string
mainProto string
ignoreSystemDNS bool
internalResolverReady bool
useIPv4 bool
useIPv6 bool
http3 bool
tlsDisableSessionTickets bool
tlsCipherSuite []uint16
proxyDialer *netproxy.Dialer
httpProxyFunction func(*http.Request) (*url.URL, error)
tlsClientCreds DOHClientCreds
keyLogWriter io.Writer
}
func NewXTransport() *XTransport {
@ -69,6 +83,7 @@ func NewXTransport() *XTransport {
}
xTransport := XTransport{
cachedIPs: CachedIPs{cache: make(map[string]*CachedIPItem)},
altSupport: AltSupport{cache: make(map[string]uint16)},
keepAlive: DefaultKeepAlive,
timeout: DefaultTimeout,
bootstrapResolvers: []string{DefaultBootstrapResolver},
@ -78,6 +93,7 @@ func NewXTransport() *XTransport {
useIPv6: false,
tlsDisableSessionTickets: false,
tlsCipherSuite: nil,
keyLogWriter: nil,
}
return &xTransport
}
@ -121,7 +137,7 @@ func (xTransport *XTransport) loadCachedIP(host string) (ip net.IP, expired bool
func (xTransport *XTransport) rebuildTransport() {
dlog.Debug("Rebuilding transport")
if xTransport.transport != nil {
(*xTransport.transport).CloseIdleConnections()
xTransport.transport.CloseIdleConnections()
}
timeout := xTransport.timeout
transport := &http.Transport{
@ -145,7 +161,7 @@ func (xTransport *XTransport) rebuildTransport() {
ipOnly = "[" + cachedIP.String() + "]"
}
} else {
dlog.Debugf("[%s] IP address was not cached", host)
dlog.Debugf("[%s] IP address was not cached in DialContext", host)
}
addrStr = ipOnly + ":" + strconv.Itoa(port)
if xTransport.proxyDialer == nil {
@ -164,11 +180,15 @@ func (xTransport *XTransport) rebuildTransport() {
tlsClientConfig := tls.Config{}
certPool, certPoolErr := x509.SystemCertPool()
if xTransport.keyLogWriter != nil {
tlsClientConfig.KeyLogWriter = xTransport.keyLogWriter
}
if clientCreds.rootCA != "" {
if certPool == nil {
dlog.Fatalf("Additional CAs not supported on this platform: %v", certPoolErr)
}
additionalCaCert, err := ioutil.ReadFile(clientCreds.rootCA)
additionalCaCert, err := os.ReadFile(clientCreds.rootCA)
if err != nil {
dlog.Fatal(err)
}
@ -177,7 +197,7 @@ func (xTransport *XTransport) rebuildTransport() {
if certPool != nil {
// Some operating systems don't include Let's Encrypt ISRG Root X1 certificate yet
var letsEncryptX1Cert = []byte(`-----BEGIN CERTIFICATE-----
letsEncryptX1Cert := []byte(`-----BEGIN CERTIFICATE-----
MIIFazCCA1OgAwIBAgIRAIIQz7DSQONZRGPgu2OCiwAwDQYJKoZIhvcNAQELBQAwTzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2VhcmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMTUwNjA0MTEwNDM4WhcNMzUwNjA0MTEwNDM4WjBPMQswCQYDVQQGEwJVUzEpMCcGA1UEChMgSW50ZXJuZXQgU2VjdXJpdHkgUmVzZWFyY2ggR3JvdXAxFTATBgNVBAMTDElTUkcgUm9vdCBYMTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAK3oJHP0FDfzm54rVygch77ct984kIxuPOZXoHj3dcKi/vVqbvYATyjb3miGbESTtrFj/RQSa78f0uoxmyF+0TM8ukj13Xnfs7j/EvEhmkvBioZxaUpmZmyPfjxwv60pIgbz5MDmgK7iS4+3mX6UA5/TR5d8mUgjU+g4rk8Kb4Mu0UlXjIB0ttov0DiNewNwIRt18jA8+o+u3dpjq+sWT8KOEUt+zwvo/7V3LvSye0rgTBIlDHCNAymg4VMk7BPZ7hm/ELNKjD+Jo2FR3qyHB5T0Y3HsLuJvW5iB4YlcNHlsdu87kGJ55tukmi8mxdAQ4Q7e2RCOFvu396j3x+UCB5iPNgiV5+I3lg02dZ77DnKxHZu8A/lJBdiB3QW0KtZB6awBdpUKD9jf1b0SHzUvKBds0pjBqAlkd25HN7rOrFleaJ1/ctaJxQZBKT5ZPt0m9STJEadao0xAH0ahmbWnOlFuhjuefXKnEgV4We0+UXgVCwOPjdAvBbI+e0ocS3MFEvzG6uBQE3xDk3SzynTnjh8BCNAw1FtxNrQHusEwMFxIt4I7mKZ9YIqioymCzLq9gwQbooMDQaHWBfEbwrbwqHyGO0aoSCqI3Haadr8faqU9GY/rOPNk3sgrDQoo//fb4hVC1CLQJ13hef4Y53CIrU7m2Ys6xt0nUW7/vGT1M0NPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBR5tFnme7bl5AFzgAiIyBpY9umbbjANBgkqhkiG9w0BAQsFAAOCAgEAVR9YqbyyqFDQDLHYGmkgJykIrGF1XIpu+ILlaS/V9lZLubhzEFnTIZd+50xx+7LSYK05qAvqFyFWhfFQDlnrzuBZ6brJFe+GnY+EgPbk6ZGQ3BebYhtF8GaV0nxvwuo77x/Py9auJ/GpsMiu/X1+mvoiBOv/2X/qkSsisRcOj/KKNFtY2PwByVS5uCbMiogziUwthDyC3+6WVwW6LLv3xLfHTjuCvjHIInNzktHCgKQ5ORAzI4JMPJ+GslWYHb4phowim57iaztXOoJwTdwJx4nLCgdNbOhdjsnvzqvHu7UrTkXWStAmzOVyyghqpZXjFaH3pO3JLF+l+/+sKAIuvtd7u+Nxe5AW0wdeRlN8NwdCjNPElpzVmbUq4JUagEiuTDkHzsxHpFKVK7q4+63SM1N95R1NbdWhscdCb+ZAJzVcoyi3B43njTOQ5yOf+1CceWxG1bQVs5ZufpsMljq4Ui0/1lvh+wjChP4kqKOJ2qxq4RgqsahDYVvTH9w7jXbyLeiNdd8XM2w9U/t7y0Ff/9yi0GE44Za4rF2LN9d11TPAmRGunUHBcnWEvgJBQl9nJEiU0Zsnvgc/ubhPgXRR4Xq37Z0j4r7g1SgEEzwxA57demyPxgcYxn/eR44/KJ4EBs+lVDR3veyJm+kXQ99b21/+jh5Xos1AnX5iItreGCc=
-----END CERTIFICATE-----`)
certPool.AppendCertsFromPEM(letsEncryptX1Cert)
@ -187,7 +207,12 @@ func (xTransport *XTransport) rebuildTransport() {
if clientCreds.clientCert != "" {
cert, err := tls.LoadX509KeyPair(clientCreds.clientCert, clientCreds.clientKey)
if err != nil {
dlog.Fatalf("Unable to use certificate [%v] (key: [%v]): %v", clientCreds.clientCert, clientCreds.clientKey, err)
dlog.Fatalf(
"Unable to use certificate [%v] (key: [%v]): %v",
clientCreds.clientCert,
clientCreds.clientKey,
err,
)
}
tlsClientConfig.Certificates = []tls.Certificate{cert}
}
@ -200,6 +225,30 @@ func (xTransport *XTransport) rebuildTransport() {
if xTransport.tlsCipherSuite != nil {
tlsClientConfig.PreferServerCipherSuites = false
tlsClientConfig.CipherSuites = xTransport.tlsCipherSuite
// Go doesn't allow changing the cipher suite with TLS 1.3
// So, check if the requested set of ciphers matches the TLS 1.3 suite.
// If it doesn't, downgrade to TLS 1.2
compatibleSuitesCount := 0
for _, suite := range tls.CipherSuites() {
if suite.Insecure {
continue
}
for _, supportedVersion := range suite.SupportedVersions {
if supportedVersion != tls.VersionTLS13 {
for _, expectedSuiteID := range xTransport.tlsCipherSuite {
if expectedSuiteID == suite.ID {
compatibleSuitesCount += 1
break
}
}
}
}
}
if compatibleSuitesCount != len(tls.CipherSuites()) {
dlog.Notice("Explicit cipher suite configured - downgrading to TLS 1.2")
tlsClientConfig.MaxVersion = tls.VersionTLS12
}
}
}
transport.TLSClientConfig = &tlsClientConfig
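The new block above detects when an explicitly configured cipher-suite list cannot be honored under TLS 1.3 (Go ignores Config.CipherSuites when 1.3 is negotiated) and caps the connection at TLS 1.2. A simplified standalone variant of that check, not the exact logic from the diff; the configured suite is an arbitrary example:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// capTLSVersionForSuites returns a tls.Config honoring an explicit cipher
// suite list. If any configured suite is a TLS 1.2 suite, the connection is
// capped at TLS 1.2 so the restriction actually takes effect.
func capTLSVersionForSuites(configured []uint16) *tls.Config {
	cfg := &tls.Config{CipherSuites: configured}
	tls13Only := true
	for _, suite := range tls.CipherSuites() {
		for _, id := range configured {
			if id != suite.ID {
				continue
			}
			for _, v := range suite.SupportedVersions {
				if v != tls.VersionTLS13 {
					tls13Only = false
				}
			}
		}
	}
	if !tls13Only {
		cfg.MaxVersion = tls.VersionTLS12
	}
	return cfg
}

func main() {
	cfg := capTLSVersionForSuites([]uint16{tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256})
	fmt.Printf("max TLS version: %x\n", cfg.MaxVersion)
}
```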
@ -208,6 +257,45 @@ func (xTransport *XTransport) rebuildTransport() {
http2Transport.AllowHTTP = false
}
xTransport.transport = transport
if xTransport.http3 {
dial := func(ctx context.Context, addrStr string, tlsCfg *tls.Config, cfg *quic.Config) (quic.EarlyConnection, error) {
dlog.Debugf("Dialing for H3: [%v]", addrStr)
host, port := ExtractHostAndPort(addrStr, stamps.DefaultPort)
ipOnly := host
cachedIP, _ := xTransport.loadCachedIP(host)
network := "udp4"
if cachedIP != nil {
if ipv4 := cachedIP.To4(); ipv4 != nil {
ipOnly = ipv4.String()
} else {
ipOnly = "[" + cachedIP.String() + "]"
network = "udp6"
}
} else {
dlog.Debugf("[%s] IP address was not cached in H3 context", host)
if xTransport.useIPv6 {
if xTransport.useIPv4 {
network = "udp"
} else {
network = "udp6"
}
}
}
addrStr = ipOnly + ":" + strconv.Itoa(port)
udpAddr, err := net.ResolveUDPAddr(network, addrStr)
if err != nil {
return nil, err
}
udpConn, err := net.ListenUDP(network, nil)
if err != nil {
return nil, err
}
tlsCfg.ServerName = host
return quic.DialEarly(ctx, udpConn, udpAddr, tlsCfg, cfg)
}
h3Transport := &http3.RoundTripper{DisableCompression: true, TLSClientConfig: &tlsClientConfig, Dial: dial}
xTransport.h3Transport = h3Transport
}
}
func (xTransport *XTransport) resolveUsingSystem(host string) (ip net.IP, ttl time.Duration, err error) {
@ -238,7 +326,10 @@ func (xTransport *XTransport) resolveUsingSystem(host string) (ip net.IP, ttl ti
return
}
func (xTransport *XTransport) resolveUsingResolver(proto, host string, resolver string) (ip net.IP, ttl time.Duration, err error) {
func (xTransport *XTransport) resolveUsingResolver(
proto, host string,
resolver string,
) (ip net.IP, ttl time.Duration, err error) {
dnsClient := dns.Client{Net: proto}
if xTransport.useIPv4 {
msg := dns.Msg{}
@ -283,17 +374,21 @@ func (xTransport *XTransport) resolveUsingResolver(proto, host string, resolver
return
}
func (xTransport *XTransport) resolveUsingResolvers(proto, host string, resolvers []string) (ip net.IP, ttl time.Duration, err error) {
func (xTransport *XTransport) resolveUsingResolvers(
proto, host string,
resolvers []string,
) (ip net.IP, ttl time.Duration, err error) {
err = errors.New("Empty resolvers")
for i, resolver := range resolvers {
ip, ttl, err = xTransport.resolveUsingResolver(proto, host, resolver)
if err == nil {
if i > 0 {
dlog.Infof("Resolution succeeded with bootstrap resolver %s[%s]", proto, resolver)
dlog.Infof("Resolution succeeded with resolver %s[%s]", proto, resolver)
resolvers[0], resolvers[i] = resolvers[i], resolvers[0]
}
break
}
dlog.Infof("Unable to resolve [%s] using bootstrap resolver %s[%s]: %v", host, proto, resolver, err)
dlog.Infof("Unable to resolve [%s] using resolver [%s] (%s): %v", host, resolver, proto, err)
}
return
}
@ -313,19 +408,37 @@ func (xTransport *XTransport) resolveAndUpdateCache(host string) error {
var foundIP net.IP
var ttl time.Duration
var err error
if !xTransport.ignoreSystemDNS {
foundIP, ttl, err = xTransport.resolveUsingSystem(host)
protos := []string{"udp", "tcp"}
if xTransport.mainProto == "tcp" {
protos = []string{"tcp", "udp"}
}
if xTransport.ignoreSystemDNS || err != nil {
protos := []string{"udp", "tcp"}
if xTransport.mainProto == "tcp" {
protos = []string{"tcp", "udp"}
if xTransport.ignoreSystemDNS {
if xTransport.internalResolverReady {
for _, proto := range protos {
foundIP, ttl, err = xTransport.resolveUsingResolvers(proto, host, xTransport.internalResolvers)
if err == nil {
break
}
}
} else {
err = errors.New("Service is not usable yet")
dlog.Notice(err)
}
} else {
foundIP, ttl, err = xTransport.resolveUsingSystem(host)
if err != nil {
err = errors.New("System DNS is not usable yet")
dlog.Notice(err)
}
}
if err != nil {
for _, proto := range protos {
if err != nil {
dlog.Noticef("System DNS configuration not usable yet, exceptionally resolving [%s] using bootstrap resolvers over %s", host, proto)
} else {
dlog.Debugf("Resolving [%s] using bootstrap resolvers over %s", host, proto)
dlog.Noticef(
"Resolving server host [%s] using bootstrap resolvers over %s",
host,
proto,
)
}
foundIP, ttl, err = xTransport.resolveUsingResolvers(proto, host, xTransport.bootstrapResolvers)
if err == nil {
@ -349,16 +462,50 @@ func (xTransport *XTransport) resolveAndUpdateCache(host string) error {
return err
}
}
if foundIP == nil {
if !xTransport.useIPv4 && xTransport.useIPv6 {
dlog.Warnf("no IPv6 address found for [%s]", host)
} else if xTransport.useIPv4 && !xTransport.useIPv6 {
dlog.Warnf("no IPv4 address found for [%s]", host)
} else {
dlog.Errorf("no IP address found for [%s]", host)
}
}
xTransport.saveCachedIP(host, foundIP, ttl)
dlog.Debugf("[%s] IP address [%s] added to the cache, valid for %v", host, foundIP, ttl)
return nil
}
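resolveUsingResolvers above walks a resolver list in order and, on success, swaps the working resolver to the front so later lookups try it first. A generic sketch of that promote-on-success pattern with placeholder resolvers and a stubbed query function:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveWithFallback tries resolvers in order and promotes the first one
// that succeeds to the front of the slice, so the next call starts there.
func resolveWithFallback(host string, resolvers []string, query func(host, resolver string) (string, error)) (string, error) {
	err := errors.New("empty resolver list")
	for i, resolver := range resolvers {
		var ip string
		ip, err = query(host, resolver)
		if err == nil {
			if i > 0 {
				resolvers[0], resolvers[i] = resolvers[i], resolvers[0]
			}
			return ip, nil
		}
	}
	return "", err
}

func main() {
	resolvers := []string{"192.0.2.1:53", "9.9.9.9:53"}
	ip, err := resolveWithFallback("example.com", resolvers, func(host, resolver string) (string, error) {
		if resolver == "192.0.2.1:53" { // stand-in for an unreachable resolver
			return "", errors.New("timeout")
		}
		return "93.184.216.34", nil // stand-in for a real lookup
	})
	fmt.Println(ip, err, resolvers)
}
```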
func (xTransport *XTransport) Fetch(method string, url *url.URL, accept string, contentType string, body *[]byte, timeout time.Duration) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
func (xTransport *XTransport) Fetch(
method string,
url *url.URL,
accept string,
contentType string,
body *[]byte,
timeout time.Duration,
compress bool,
) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
if timeout <= 0 {
timeout = xTransport.timeout
}
client := http.Client{Transport: xTransport.transport, Timeout: timeout}
client := http.Client{
Transport: xTransport.transport,
Timeout: timeout,
}
host, port := ExtractHostAndPort(url.Host, 443)
hasAltSupport := false
if xTransport.h3Transport != nil {
xTransport.altSupport.RLock()
var altPort uint16
altPort, hasAltSupport = xTransport.altSupport.cache[url.Host]
xTransport.altSupport.RUnlock()
if hasAltSupport {
if int(altPort) == port {
client.Transport = xTransport.h3Transport
dlog.Debugf("Using HTTP/3 transport for [%s]", url.Host)
}
}
}
header := map[string][]string{"User-Agent": {"dnscrypt-proxy"}}
if len(accept) > 0 {
header["Accept"] = []string{accept}
@ -375,14 +522,19 @@ func (xTransport *XTransport) Fetch(method string, url *url.URL, accept string,
url2.RawQuery = qs.Encode()
url = &url2
}
host, _ := ExtractHostAndPort(url.Host, 0)
if xTransport.proxyDialer == nil && strings.HasSuffix(host, ".onion") {
return nil, 0, nil, 0, errors.New("Onion service is not reachable without Tor")
}
if err := xTransport.resolveAndUpdateCache(host); err != nil {
dlog.Errorf("Unable to resolve [%v] - Make sure that the system resolver works, or that `bootstrap_resolvers` has been set to resolvers that can be reached", host)
dlog.Errorf(
"Unable to resolve [%v] - Make sure that the system resolver works, or that `bootstrap_resolvers` has been set to resolvers that can be reached",
host,
)
return nil, 0, nil, 0, err
}
if compress && body == nil {
header["Accept-Encoding"] = []string{"gzip"}
}
req := &http.Request{
Method: method,
URL: url,
@ -391,7 +543,7 @@ func (xTransport *XTransport) Fetch(method string, url *url.URL, accept string,
}
if body != nil {
req.ContentLength = int64(len(*body))
req.Body = ioutil.NopCloser(bytes.NewReader(*body))
req.Body = io.NopCloser(bytes.NewReader(*body))
}
start := time.Now()
resp, err := client.Do(req)
@ -403,7 +555,8 @@ func (xTransport *XTransport) Fetch(method string, url *url.URL, accept string,
err = errors.New(resp.Status)
}
} else {
(*xTransport.transport).CloseIdleConnections()
dlog.Debugf("HTTP client error: [%v] - closing idle connections", err)
xTransport.transport.CloseIdleConnections()
}
statusCode := 503
if resp != nil {
@ -412,14 +565,53 @@ func (xTransport *XTransport) Fetch(method string, url *url.URL, accept string,
if err != nil {
dlog.Debugf("[%s]: [%s]", req.URL, err)
if xTransport.tlsCipherSuite != nil && strings.Contains(err.Error(), "handshake failure") {
dlog.Warnf("TLS handshake failure - Try changing or deleting the tls_cipher_suite value in the configuration file")
dlog.Warnf(
"TLS handshake failure - Try changing or deleting the tls_cipher_suite value in the configuration file",
)
xTransport.tlsCipherSuite = nil
xTransport.rebuildTransport()
}
return nil, statusCode, nil, rtt, err
}
if xTransport.h3Transport != nil && !hasAltSupport {
if alt, found := resp.Header["Alt-Svc"]; found {
dlog.Debugf("Alt-Svc [%s]: [%s]", url.Host, alt)
altPort := uint16(port & 0xffff)
for i, xalt := range alt {
for j, v := range strings.Split(xalt, ";") {
if i >= 8 || j >= 16 {
break
}
v = strings.TrimSpace(v)
if strings.HasPrefix(v, "h3=\":") {
v = strings.TrimPrefix(v, "h3=\":")
v = strings.TrimSuffix(v, "\"")
if xAltPort, err := strconv.ParseUint(v, 10, 16); err == nil && xAltPort <= 65535 {
altPort = uint16(xAltPort)
dlog.Debugf("Using HTTP/3 for [%s]", url.Host)
break
}
}
}
}
xTransport.altSupport.Lock()
xTransport.altSupport.cache[url.Host] = altPort
dlog.Debugf("Caching altPort for [%v]", url.Host)
xTransport.altSupport.Unlock()
}
}
tls := resp.TLS
bin, err := ioutil.ReadAll(io.LimitReader(resp.Body, MaxHTTPBodyLength))
var bodyReader io.ReadCloser = resp.Body
if compress && resp.Header.Get("Content-Encoding") == "gzip" {
bodyReader, err = gzip.NewReader(io.LimitReader(resp.Body, MaxHTTPBodyLength))
if err != nil {
return nil, statusCode, tls, rtt, err
}
defer bodyReader.Close()
}
bin, err := io.ReadAll(io.LimitReader(bodyReader, MaxHTTPBodyLength))
if err != nil {
return nil, statusCode, tls, rtt, err
}
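GetWithCompression asks for gzip only on body-less GET requests and decompresses transparently, which is how source files are now fetched. A reduced sketch of that round trip with net/http; the URL and size cap are examples, and note that manually setting Accept-Encoding disables Go's automatic decompression, hence the explicit Content-Encoding check:

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"net/http"
)

const maxBodyLength = 4 * 1024 * 1024 // example cap, not the value from the diff

// fetchMaybeGzipped requests a URL with gzip allowed and returns the
// decompressed body, capped at maxBodyLength bytes.
func fetchMaybeGzipped(url string) ([]byte, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept-Encoding", "gzip")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var body io.Reader = io.LimitReader(resp.Body, maxBodyLength)
	if resp.Header.Get("Content-Encoding") == "gzip" {
		gz, err := gzip.NewReader(body)
		if err != nil {
			return nil, err
		}
		defer gz.Close()
		body = io.LimitReader(gz, maxBodyLength)
	}
	return io.ReadAll(body)
}

func main() {
	bin, err := fetchMaybeGzipped("https://example.com/")
	fmt.Println(len(bin), err)
}
```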
@ -427,15 +619,39 @@ func (xTransport *XTransport) Fetch(method string, url *url.URL, accept string,
return bin, statusCode, tls, rtt, err
}
func (xTransport *XTransport) Get(url *url.URL, accept string, timeout time.Duration) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
return xTransport.Fetch("GET", url, accept, "", nil, timeout)
func (xTransport *XTransport) GetWithCompression(
url *url.URL,
accept string,
timeout time.Duration,
) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
return xTransport.Fetch("GET", url, accept, "", nil, timeout, true)
}
func (xTransport *XTransport) Post(url *url.URL, accept string, contentType string, body *[]byte, timeout time.Duration) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
return xTransport.Fetch("POST", url, accept, contentType, body, timeout)
func (xTransport *XTransport) Get(
url *url.URL,
accept string,
timeout time.Duration,
) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
return xTransport.Fetch("GET", url, accept, "", nil, timeout, false)
}
func (xTransport *XTransport) dohLikeQuery(dataType string, useGet bool, url *url.URL, body []byte, timeout time.Duration) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
func (xTransport *XTransport) Post(
url *url.URL,
accept string,
contentType string,
body *[]byte,
timeout time.Duration,
) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
return xTransport.Fetch("POST", url, accept, contentType, body, timeout, false)
}
func (xTransport *XTransport) dohLikeQuery(
dataType string,
useGet bool,
url *url.URL,
body []byte,
timeout time.Duration,
) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
if useGet {
qs := url.Query()
encBody := base64.RawURLEncoding.EncodeToString(body)
@ -447,10 +663,20 @@ func (xTransport *XTransport) dohLikeQuery(dataType string, useGet bool, url *ur
return xTransport.Post(url, dataType, dataType, &body, timeout)
}
func (xTransport *XTransport) DoHQuery(useGet bool, url *url.URL, body []byte, timeout time.Duration) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
func (xTransport *XTransport) DoHQuery(
useGet bool,
url *url.URL,
body []byte,
timeout time.Duration,
) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
return xTransport.dohLikeQuery("application/dns-message", useGet, url, body, timeout)
}
func (xTransport *XTransport) ObliviousDoHQuery(useGet bool, url *url.URL, body []byte, timeout time.Duration) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
func (xTransport *XTransport) ObliviousDoHQuery(
useGet bool,
url *url.URL,
body []byte,
timeout time.Duration,
) ([]byte, int, *tls.ConnectionState, time.Duration, error) {
return xTransport.dohLikeQuery("application/oblivious-dns-message", useGet, url, body, timeout)
}
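The Fetch changes above cache, per host, the port advertised for h3 in an Alt-Svc response header and switch to the http3.RoundTripper when it matches the URL's port. A hedged sketch of just the header parsing; the header value in main is a made-up example:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// h3AltPort returns the port advertised for HTTP/3 in Alt-Svc header
// values, or ok=false if none is present.
func h3AltPort(altSvc []string) (port uint16, ok bool) {
	for _, value := range altSvc {
		for _, field := range strings.Split(value, ";") {
			field = strings.TrimSpace(field)
			if !strings.HasPrefix(field, `h3=":`) {
				continue
			}
			field = strings.TrimSuffix(strings.TrimPrefix(field, `h3=":`), `"`)
			if p, err := strconv.ParseUint(field, 10, 16); err == nil {
				return uint16(p), true
			}
		}
	}
	return 0, false
}

func main() {
	// Example header as a server might send it.
	port, ok := h3AltPort([]string{`h3=":443"; ma=86400, h2=":443"`})
	fmt.Println(port, ok)
}
```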

185
go.mod
View File

@ -1,170 +1,51 @@
module github.com/dnscrypt/dnscrypt-proxy
go 1.17
go 1.22
require (
github.com/BurntSushi/toml v0.4.1
github.com/BurntSushi/toml v1.3.2
github.com/VividCortex/ewma v1.2.0
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
github.com/dchest/safefile v0.0.0-20151022103144-855e8d98f185
github.com/hashicorp/go-immutable-radix v1.3.1
github.com/hashicorp/golang-lru v0.5.4
github.com/hectane/go-acl v0.0.0-20190604041725-da78bae5fc95
github.com/jedisct1/dlog v0.0.0-20210927135244-3381aa132e7f
github.com/jedisct1/go-clocksmith v0.0.0-20210101121932-da382b963868
github.com/jedisct1/go-dnsstamps v0.0.0-20210810213811-61cc83d2a354
github.com/jedisct1/go-hpke-compact v0.0.0-20210927135353-5b1ea328c479
github.com/jedisct1/go-minisign v0.0.0-20210927135422-df01d8d3e6f4
github.com/jedisct1/xsecretbox v0.0.0-20210927135450-ebe41aef7bef
github.com/hectane/go-acl v0.0.0-20230122075934-ca0b05cb1adb
github.com/jedisct1/dlog v0.0.0-20230811132706-443b333ff1b3
github.com/jedisct1/go-clocksmith v0.0.0-20230211133011-392c1afea73e
github.com/jedisct1/go-dnsstamps v0.0.0-20240423203910-07a0735c7774
github.com/jedisct1/go-hpke-compact v0.0.0-20230811132953-4ee502b61f80
github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267
github.com/jedisct1/xsecretbox v0.0.0-20230811132812-b950633f9f1f
github.com/k-sone/critbitgo v1.4.0
github.com/kardianos/service v1.2.0
github.com/miekg/dns v1.1.43
github.com/powerman/check v1.6.0
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519
golang.org/x/net v0.0.0-20210924151903-3ad01bbaa167
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6
gopkg.in/natefinch/lumberjack.v2 v2.0.0
github.com/kardianos/service v1.2.2
github.com/miekg/dns v1.1.59
github.com/opencoff/go-sieve v0.2.1
github.com/powerman/check v1.7.0
github.com/quic-go/quic-go v0.43.1
golang.org/x/crypto v0.23.0
golang.org/x/net v0.25.0
golang.org/x/sys v0.20.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
)
require (
4d63.com/gochecknoglobals v0.0.0-20201008074935-acfc0b28355a // indirect
github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/OpenPeeDeeP/depguard v1.0.1 // indirect
github.com/alexkohler/prealloc v1.0.0 // indirect
github.com/ashanbrown/forbidigo v1.2.0 // indirect
github.com/ashanbrown/makezero v0.0.0-20210520155254-b6261585ddde // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bkielbasa/cyclop v1.2.0 // indirect
github.com/bombsimon/wsl/v3 v3.3.0 // indirect
github.com/cespare/xxhash/v2 v2.1.1 // indirect
github.com/charithe/durationcheck v0.0.8 // indirect
github.com/chavacava/garif v0.0.0-20210405164556-e8a0a408d6af // indirect
github.com/daixiang0/gci v0.2.8 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/denis-tingajkin/go-header v0.4.2 // indirect
github.com/esimonov/ifshort v1.0.2 // indirect
github.com/ettle/strcase v0.1.1 // indirect
github.com/fatih/color v1.12.0 // indirect
github.com/fatih/structtag v1.2.0 // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/fzipp/gocyclo v0.3.1 // indirect
github.com/go-critic/go-critic v0.5.6 // indirect
github.com/go-toolsmith/astcast v1.0.0 // indirect
github.com/go-toolsmith/astcopy v1.0.0 // indirect
github.com/go-toolsmith/astequal v1.0.0 // indirect
github.com/go-toolsmith/astfmt v1.0.0 // indirect
github.com/go-toolsmith/astp v1.0.0 // indirect
github.com/go-toolsmith/strparse v1.0.0 // indirect
github.com/go-toolsmith/typep v1.0.2 // indirect
github.com/go-xmlfmt/xmlfmt v0.0.0-20191208150333-d5b6f63a941b // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/gofrs/flock v0.8.0 // indirect
github.com/golang/protobuf v1.5.0 // indirect
github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2 // indirect
github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a // indirect
github.com/golangci/go-misc v0.0.0-20180628070357-927a3d87b613 // indirect
github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a // indirect
github.com/golangci/golangci-lint v1.41.1 // indirect
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 // indirect
github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca // indirect
github.com/golangci/misspell v0.3.5 // indirect
github.com/golangci/revgrep v0.0.0-20210208091834-cd28932614b5 // indirect
github.com/golangci/unconvert v0.0.0-20180507085042-28b1c447d1f4 // indirect
github.com/google/go-cmp v0.5.5 // indirect
github.com/gordonklaus/ineffassign v0.0.0-20210225214923-2e10b2664254 // indirect
github.com/gostaticanalysis/analysisutil v0.4.1 // indirect
github.com/gostaticanalysis/comment v1.4.1 // indirect
github.com/gostaticanalysis/forcetypeassert v0.0.0-20200621232751-01d4955beaa5 // indirect
github.com/gostaticanalysis/nilerr v0.1.1 // indirect
github.com/hashicorp/errwrap v1.0.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 // indirect
github.com/hashicorp/go-syslog v1.0.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/jgautheron/goconst v1.5.1 // indirect
github.com/jingyugao/rowserrcheck v1.1.0 // indirect
github.com/jirfag/go-printf-func-name v0.0.0-20200119135958-7558a9eaa5af // indirect
github.com/julz/importas v0.0.0-20210419104244-841f0c0fe66d // indirect
github.com/kisielk/errcheck v1.6.0 // indirect
github.com/kisielk/gotool v1.0.0 // indirect
github.com/kulti/thelper v0.4.0 // indirect
github.com/kunwardeep/paralleltest v1.0.2 // indirect
github.com/kyoh86/exportloopref v0.1.8 // indirect
github.com/ldez/gomoddirectives v0.2.1 // indirect
github.com/ldez/tagliatelle v0.2.0 // indirect
github.com/magiconair/properties v1.8.1 // indirect
github.com/maratori/testpackage v1.0.1 // indirect
github.com/matoous/godox v0.0.0-20210227103229-6504466cf951 // indirect
github.com/mattn/go-colorable v0.1.8 // indirect
github.com/mattn/go-isatty v0.0.12 // indirect
github.com/mattn/go-runewidth v0.0.9 // indirect
github.com/mattn/goveralls v0.0.9 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/mbilski/exhaustivestruct v1.2.0 // indirect
github.com/mgechev/dots v0.0.0-20190921121421-c36f7dcfbb81 // indirect
github.com/mgechev/revive v1.0.7 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.1.2 // indirect
github.com/moricho/tparallel v0.2.1 // indirect
github.com/nakabonne/nestif v0.3.0 // indirect
github.com/nbutton23/zxcvbn-go v0.0.0-20210217022336-fa2cb2858354 // indirect
github.com/nishanths/exhaustive v0.1.0 // indirect
github.com/nishanths/predeclared v0.2.1 // indirect
github.com/olekukonko/tablewriter v0.0.5 // indirect
github.com/pelletier/go-toml v1.2.0 // indirect
github.com/phayes/checkstyle v0.0.0-20170904204023-bfd46e6a821d // indirect
github.com/hashicorp/golang-lru v0.5.0 // indirect
github.com/onsi/ginkgo/v2 v2.9.5 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/polyfloyd/go-errorlint v0.0.0-20210510181950-ab96adb96fea // indirect
github.com/powerman/deepequal v0.1.0 // indirect
github.com/prometheus/client_golang v1.7.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.10.0 // indirect
github.com/prometheus/procfs v0.1.3 // indirect
github.com/quasilyte/go-ruleguard v0.3.4 // indirect
github.com/quasilyte/regex/syntax v0.0.0-20200407221936-30656e2c4a95 // indirect
github.com/ryancurrah/gomodguard v1.2.2 // indirect
github.com/ryanrolds/sqlclosecheck v0.3.0 // indirect
github.com/sanposhiho/wastedassign/v2 v2.0.6 // indirect
github.com/securego/gosec/v2 v2.8.0 // indirect
github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/smartystreets/goconvey v1.6.4 // indirect
github.com/sonatard/noctx v0.0.1 // indirect
github.com/sourcegraph/go-diff v0.6.1 // indirect
github.com/spf13/afero v1.1.2 // indirect
github.com/spf13/cast v1.3.0 // indirect
github.com/spf13/cobra v1.1.3 // indirect
github.com/spf13/jwalterweatherman v1.0.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.7.1 // indirect
github.com/ssgreg/nlreturn/v2 v2.1.0 // indirect
github.com/stretchr/objx v0.1.1 // indirect
github.com/stretchr/testify v1.7.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/tdakkota/asciicheck v0.0.0-20200416200610-e657995f937b // indirect
github.com/tetafro/godot v1.4.7 // indirect
github.com/timakin/bodyclose v0.0.0-20200424151742-cb6215831a94 // indirect
github.com/tomarrell/wrapcheck/v2 v2.1.0 // indirect
github.com/tommy-muehle/go-mnd/v2 v2.4.0 // indirect
github.com/ultraware/funlen v0.0.3 // indirect
github.com/ultraware/whitespace v0.0.4 // indirect
github.com/uudashr/gocognit v1.0.1 // indirect
github.com/yeya24/promlinter v0.1.0 // indirect
golang.org/x/mod v0.4.2 // indirect
golang.org/x/text v0.3.6 // indirect
golang.org/x/tools v0.1.3 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20200707001353-8e8330bf89df // indirect
google.golang.org/grpc v1.38.0 // indirect
google.golang.org/protobuf v1.27.0 // indirect
gopkg.in/ini.v1 v1.51.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
honnef.co/go/tools v0.2.0 // indirect
mvdan.cc/gofumpt v0.1.1 // indirect
mvdan.cc/interfacer v0.0.0-20180901003855-c20040233aed // indirect
mvdan.cc/lint v0.0.0-20170908181259-adc824a0674b // indirect
mvdan.cc/unparam v0.0.0-20210104141923-aac4ce9116a7 // indirect
github.com/quic-go/qpack v0.4.0 // indirect
github.com/smartystreets/goconvey v1.7.2 // indirect
go.uber.org/mock v0.4.0 // indirect
golang.org/x/exp v0.0.0-20221205204356-47842c84f3db // indirect
golang.org/x/mod v0.16.0 // indirect
golang.org/x/text v0.15.0 // indirect
golang.org/x/tools v0.19.0 // indirect
google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f // indirect
google.golang.org/grpc v1.53.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
)

go.sum (1161 lines changed; diff suppressed because it is too large)

@ -53,8 +53,8 @@ https://pgl.yoyo.org/adservers/serverlist.php?hostformat=nohtml
# Basic tracking list by Disconnect
# https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt
# KAD host file (fraud/adware) without controversies
# https://raw.githubusercontent.com/PolishFiltersTeam/KADhosts/master/KADhosts_without_controversies.txt
# KAD host file (fraud/adware)
# https://raw.githubusercontent.com/PolishFiltersTeam/KADhosts/master/KADomains.txt
# BarbBlock list (spurious and invalid DMCA takedowns)
https://paulgb.github.io/BarbBlock/blacklists/domain-list.txt
@ -92,15 +92,15 @@ https://hostfiles.frogeye.fr/firstparty-trackers.txt
# Steven Black hosts file
# https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
# A list of adserving and tracking sites maintained by @lightswitch05
https://www.github.developerdan.com/hosts/lists/ads-and-tracking-extended.txt
# A list of adserving and tracking sites maintained by @anudeepND
# https://raw.githubusercontent.com/anudeepND/blacklist/master/adservers.txt
https://raw.githubusercontent.com/anudeepND/blacklist/master/adservers.txt
# Anudeep's Blacklist (CoinMiner) - Blocks cryptojacking sites
# https://raw.githubusercontent.com/anudeepND/blacklist/master/CoinMiner.txt
# Block Spotify ads
# https://gitlab.com/CHEF-KOCH/cks-filterlist/-/raw/master/Anti-Corp/Spotify/Spotify-HOSTS.txt
### Spark < Blu Go < Blu < Basic < Ultimate
### (With pornware blocking) Porn < Unified
# Energized Ultimate
@ -113,10 +113,13 @@ https://hostfiles.frogeye.fr/firstparty-trackers.txt
# https://block.energized.pro/blu/formats/domains.txt
# OISD.NL - Blocks ads, phishing, malware, tracking and more. WARNING: this is a huge list.
# https://dbl.oisd.nl/
# https://dblw.oisd.nl/
# OISD.NL (smaller subset) - Blocks ads, phishing, malware, tracking and more. Tries to minimize false positives.
https://hosts.oisd.nl/basic/
https://dblw.oisd.nl/basic/
# OISD.NL (extra) - Blocks ads, phishing, malware, tracking and more. Protection over functionality.
# https://dblw.oisd.nl/extra/
# Captain Miao ad list - Block ads and trackers, especially Chinese and Android trackers
# https://raw.githubusercontent.com/jdlingyu/ad-wars/master/sha_ad_hosts
@ -131,6 +134,7 @@ https://hosts.oisd.nl/basic/
# https://raw.githubusercontent.com/olbat/ut1-blacklists/master/blacklists/adult/domains
# https://block.energized.pro/porn/formats/domains.txt
# https://raw.githubusercontent.com/mhxion/pornaway/master/hosts/porn_sites.txt
# https://dblw.oisd.nl/nsfw/
# Block gambling sites
# https://raw.githubusercontent.com/Sinfonietta/hostfiles/master/gambling-hosts
@ -138,11 +142,13 @@ https://hosts.oisd.nl/basic/
# Block dating websites
# https://raw.githubusercontent.com/olbat/ut1-blacklists/master/blacklists/dating/domains
# https://www.github.developerdan.com/hosts/lists/dating-services-extended.txt
# Block social media sites
# https://raw.githubusercontent.com/Sinfonietta/hostfiles/master/social-hosts
# https://block.energized.pro/extensions/social/formats/domains.txt
# https://raw.githubusercontent.com/olbat/ut1-blacklists/master/blacklists/social_networks/domains
# https://www.github.developerdan.com/hosts/lists/facebook-extended.txt
# Goodbye Ads - Specially designed for mobile ad protection
# https://raw.githubusercontent.com/jerryn70/GoodbyeAds/master/Hosts/GoodbyeAds.txt
@ -152,7 +158,3 @@ https://hosts.oisd.nl/basic/
# Block spying and tracking on Windows
# https://raw.githubusercontent.com/crazy-max/WindowsSpyBlocker/master/data/dnscrypt/spy.txt
# GameIndustry.eu - Block spyware, advertising, analytics, tracking in games and associated clients
# https://www.gameindustry.eu/files/hosts.txt


@ -65,6 +65,7 @@ def parse_list(content, trusted=False):
r"^@*\|\|([a-z0-9][a-z0-9.-]*[.][a-z]{2,})\^?(\$(popup|third-party))?$"
)
rx_l = re.compile(r"^([a-z0-9][a-z0-9.-]*[.][a-z]{2,})$")
rx_lw = re.compile(r"^[*][.]([a-z0-9][a-z0-9.-]*[.][a-z]{2,})$")
rx_h = re.compile(
r"^[0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3}[.][0-9]{1,3}\s+([a-z0-9][a-z0-9.-]*[.][a-z]{2,})$"
)
@ -75,7 +76,7 @@ def parse_list(content, trusted=False):
names = set()
time_restrictions = {}
globs = set()
rx_set = [rx_u, rx_l, rx_h, rx_mdl, rx_b, rx_dq]
rx_set = [rx_u, rx_l, rx_lw, rx_h, rx_mdl, rx_b, rx_dq]
for line in content.splitlines():
line = str.lower(str.strip(line))
if rx_comment.match(line):
@ -92,7 +93,8 @@ def parse_list(content, trusted=False):
def print_restricted_name(output_fd, name, time_restrictions):
if name in time_restrictions:
print("{}\t{}".format(name, time_restrictions[name]), file=output_fd, end="\n")
print("{}\t{}".format(
name, time_restrictions[name]), file=output_fd, end="\n")
else:
print(
"# ignored: [{}] was in the time-restricted list, "
@ -120,7 +122,8 @@ def load_from_url(url):
except urllib.URLError as err:
raise Exception("[{}] could not be loaded: {}\n".format(url, err))
if trusted is False and response.getcode() != 200:
raise Exception("[{}] returned HTTP code {}\n".format(url, response.getcode()))
raise Exception("[{}] returned HTTP code {}\n".format(
url, response.getcode()))
content = response.read()
if URLLIB_NEW:
content = content.decode("utf-8", errors="replace")
@ -262,10 +265,12 @@ def blocklists_from_config_file(
list_names.sort(key=name_cmp)
if ignored:
print("# Ignored duplicates: {}".format(ignored), file=output_fd, end="\n")
print("# Ignored duplicates: {}".format(
ignored), file=output_fd, end="\n")
if glob_ignored:
print(
"# Ignored due to overlapping local patterns: {}".format(glob_ignored),
"# Ignored due to overlapping local patterns: {}".format(
glob_ignored),
file=output_fd,
end="\n",
)


@ -1,21 +0,0 @@
MIT License
Copyright (c) 2018 Leigh McCulloch
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -1,154 +0,0 @@
package checknoglobals
import (
"flag"
"fmt"
"go/ast"
"go/token"
"strings"
"golang.org/x/tools/go/analysis"
)
// allowedExpression is a struct representing packages and methods that will
// be an allowed combination to use as a global variable, f.ex. Name `regexp`
// and SelName `MustCompile`.
type allowedExpression struct {
Name string
SelName string
}
const Doc = `check that no global variables exist
This analyzer checks for global variables and errors on any found.
A global variable is a variable declared in package scope and that can be read
and written to by any function within the package. Global variables can cause
side effects which are difficult to keep track of. A code in one function may
change the variables state while another unrelated chunk of code may be
effected by it.`
// Analyzer provides an Analyzer that checks that there are no global
// variables, except for errors and variables containing regular
// expressions.
func Analyzer() *analysis.Analyzer {
return &analysis.Analyzer{
Name: "gochecknoglobals",
Doc: Doc,
Run: checkNoGlobals,
Flags: flags(),
RunDespiteErrors: true,
}
}
func flags() flag.FlagSet {
flags := flag.NewFlagSet("", flag.ExitOnError)
flags.Bool("t", false, "Include tests")
return *flags
}
func isAllowed(v ast.Node) bool {
switch i := v.(type) {
case *ast.Ident:
return i.Name == "_" || i.Name == "version" || looksLikeError(i)
case *ast.CallExpr:
if expr, ok := i.Fun.(*ast.SelectorExpr); ok {
return isAllowedSelectorExpression(expr)
}
case *ast.CompositeLit:
if expr, ok := i.Type.(*ast.SelectorExpr); ok {
return isAllowedSelectorExpression(expr)
}
}
return false
}
func isAllowedSelectorExpression(v *ast.SelectorExpr) bool {
x, ok := v.X.(*ast.Ident)
if !ok {
return false
}
allowList := []allowedExpression{
{Name: "regexp", SelName: "MustCompile"},
}
for _, i := range allowList {
if x.Name == i.Name && v.Sel.Name == i.SelName {
return true
}
}
return false
}
// looksLikeError returns true if the AST identifier starts
// with 'err' or 'Err', or false otherwise.
//
// TODO: https://github.com/leighmcculloch/gochecknoglobals/issues/5
func looksLikeError(i *ast.Ident) bool {
prefix := "err"
if i.IsExported() {
prefix = "Err"
}
return strings.HasPrefix(i.Name, prefix)
}
func checkNoGlobals(pass *analysis.Pass) (interface{}, error) {
includeTests := pass.Analyzer.Flags.Lookup("t").Value.(flag.Getter).Get().(bool)
for _, file := range pass.Files {
filename := pass.Fset.Position(file.Pos()).Filename
if !strings.HasSuffix(filename, ".go") {
continue
}
if !includeTests && strings.HasSuffix(filename, "_test.go") {
continue
}
for _, decl := range file.Decls {
genDecl, ok := decl.(*ast.GenDecl)
if !ok {
continue
}
if genDecl.Tok != token.VAR {
continue
}
for _, spec := range genDecl.Specs {
valueSpec := spec.(*ast.ValueSpec)
onlyAllowedValues := false
for _, vn := range valueSpec.Values {
if isAllowed(vn) {
onlyAllowedValues = true
continue
}
onlyAllowedValues = false
break
}
if onlyAllowedValues {
continue
}
for _, vn := range valueSpec.Names {
if isAllowed(vn) {
continue
}
message := fmt.Sprintf("%s is a global variable", vn.Name)
pass.Report(analysis.Diagnostic{
Pos: vn.Pos(),
Category: "global",
Message: message,
})
}
}
}
}
return nil, nil
}


@ -1,2 +1,2 @@
toml.test
/toml.test
/toml-test


@ -1 +0,0 @@
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).


@ -1,10 +1,5 @@
## TOML parser and encoder for Go with reflection
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml`
packages. This package also supports the `encoding.TextUnmarshaler` and
`encoding.TextMarshaler` interfaces so that you can define custom data
representations. (There is an example of this below.)
reflection interface similar to Go's standard library `json` and `xml` packages.
Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).
@ -14,28 +9,18 @@ See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show
v0.4.0`).
This library requires Go 1.13 or newer; install it with:
This library requires Go 1.13 or newer; add it to your go.mod with:
$ go get github.com/BurntSushi/toml
% go get github.com/BurntSushi/toml@latest
It also comes with a TOML validator CLI tool:
$ go get github.com/BurntSushi/toml/cmd/tomlv
$ tomlv some-toml-file.toml
### Testing
This package passes all tests in
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
and the encoder.
% go install github.com/BurntSushi/toml/cmd/tomlv@latest
% tomlv some-toml-file.toml
### Examples
This package works similarly to how the Go standard library handles XML and
JSON. Namely, data is loaded into Go values via reflection.
For the simplest example, consider some TOML file as just a list of keys
and values:
For the simplest example, consider some TOML file as just a list of keys and
values:
```toml
Age = 25
@ -45,7 +30,7 @@ Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```
Which could be defined in Go as:
Which can be decoded with:
```go
type Config struct {
@ -53,21 +38,15 @@ type Config struct {
Cats []string
Pi float64
Perfection []int
DOB time.Time // requires `import time`
DOB time.Time
}
```
And then decoded with:
```go
var conf Config
if _, err := toml.Decode(tomlData, &conf); err != nil {
// handle error
}
_, err := toml.Decode(tomlData, &conf)
```
You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:
You can also use struct tags if your struct field name doesn't map to a TOML key
value directly:
```toml
some_key_NAME = "wat"
@ -75,146 +54,67 @@ some_key_NAME = "wat"
```go
type TOML struct {
ObscureKey string `toml:"some_key_NAME"`
ObscureKey string `toml:"some_key_NAME"`
}
```
Beware that like other most other decoders **only exported fields** are
considered when encoding and decoding; private fields are silently ignored.
Beware that like other decoders **only exported fields** are considered when
encoding and decoding; private fields are silently ignored.
### Using the `encoding.TextUnmarshaler` interface
Here's an example that automatically parses duration strings into
`time.Duration` values:
### Using the `Marshaler` and `encoding.TextUnmarshaler` interfaces
Here's an example that automatically parses values in a `mail.Address`:
```toml
[[song]]
name = "Thunder Road"
duration = "4m49s"
[[song]]
name = "Stairway to Heaven"
duration = "8m03s"
contacts = [
"Donald Duck <donald@duckburg.com>",
"Scrooge McDuck <scrooge@duckburg.com>",
]
```
Which can be decoded with:
Can be decoded with:
```go
type song struct {
Name string
Duration duration
}
type songs struct {
Song []song
}
var favorites songs
if _, err := toml.Decode(blob, &favorites); err != nil {
log.Fatal(err)
// Create address type which satisfies the encoding.TextUnmarshaler interface.
type address struct {
*mail.Address
}
for _, s := range favorites.Song {
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
}
```
And you'll also need a `duration` type that satisfies the
`encoding.TextUnmarshaler` interface:
```go
type duration struct {
time.Duration
}
func (d *duration) UnmarshalText(text []byte) error {
func (a *address) UnmarshalText(text []byte) error {
var err error
d.Duration, err = time.ParseDuration(string(text))
a.Address, err = mail.ParseAddress(string(text))
return err
}
// Decode it.
func decode() {
blob := `
contacts = [
"Donald Duck <donald@duckburg.com>",
"Scrooge McDuck <scrooge@duckburg.com>",
]
`
var contacts struct {
Contacts []address
}
_, err := toml.Decode(blob, &contacts)
if err != nil {
log.Fatal(err)
}
for _, c := range contacts.Contacts {
fmt.Printf("%#v\n", c.Address)
}
// Output:
// &mail.Address{Name:"Donald Duck", Address:"donald@duckburg.com"}
// &mail.Address{Name:"Scrooge McDuck", Address:"scrooge@duckburg.com"}
}
```
To target TOML specifically you can implement `UnmarshalTOML` TOML interface in
a similar way.
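A minimal sketch of that route, with hypothetical `port`/`Listen` names: unlike `encoding.TextUnmarshaler`, `UnmarshalTOML` receives the already-parsed TOML value (an `int64` here).

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type port int

// UnmarshalTOML receives the parsed TOML value rather than raw text.
func (p *port) UnmarshalTOML(v interface{}) error {
	n, ok := v.(int64) // TOML integers are parsed as int64
	if !ok {
		return fmt.Errorf("port: expected an integer, got %T", v)
	}
	*p = port(n)
	return nil
}

func main() {
	var cfg struct{ Listen port }
	if _, err := toml.Decode("Listen = 8053", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Listen) // 8053
}
```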
### More complex usage
Here's an example of how to load the example from the official spec page:
```toml
# This is a TOML document. Boom.
title = "TOML Example"
[owner]
name = "Tom Preston-Werner"
organization = "GitHub"
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
[clients]
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
# Line breaks are OK when inside arrays
hosts = [
"alpha",
"omega"
]
```
And the corresponding Go types are:
```go
type tomlConfig struct {
Title string
Owner ownerInfo
DB database `toml:"database"`
Servers map[string]server
Clients clients
}
type ownerInfo struct {
Name string
Org string `toml:"organization"`
Bio string
DOB time.Time
}
type database struct {
Server string
Ports []int
ConnMax int `toml:"connection_max"`
Enabled bool
}
type server struct {
IP string
DC string
}
type clients struct {
Data [][]interface{}
Hosts []string
}
```
Note that a case insensitive match will be tried if an exact match can't be
found.
A working example of the above can be found in `_examples/example.{go,toml}`.
See the [`_example/`](/_example) directory for a more complex example.


@ -1,13 +1,16 @@
package toml
import (
"bytes"
"encoding"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"math"
"os"
"reflect"
"strconv"
"strings"
"time"
)
@ -18,16 +21,35 @@ type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
// Unmarshal decodes the contents of data in TOML format into a pointer v.
//
// See [Decoder] for a description of the decoding process.
func Unmarshal(data []byte, v interface{}) error {
_, err := NewDecoder(bytes.NewReader(data)).Decode(v)
return err
}
// Decode the TOML data in to the pointer v.
//
// See [Decoder] for a description of the decoding process.
func Decode(data string, v interface{}) (MetaData, error) {
return NewDecoder(strings.NewReader(data)).Decode(v)
}
// DecodeFile reads the contents of a file and decodes it with [Decode].
func DecodeFile(path string, v interface{}) (MetaData, error) {
fp, err := os.Open(path)
if err != nil {
return MetaData{}, err
}
defer fp.Close()
return NewDecoder(fp).Decode(v)
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
//
// This type can be used for any value, which will cause decoding to be delayed.
// You can use the PrimitiveDecode() function to "manually" decode these values.
// You can use [PrimitiveDecode] to "manually" decode these values.
//
// NOTE: The underlying representation of a `Primitive` value is subject to
// change. Do not rely on it.
@ -40,32 +62,25 @@ type Primitive struct {
context Key
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// The significand precision for float32 and float64 is 24 and 53 bits; this is
// the range a natural number can be stored in a float without loss of data.
const (
maxSafeFloat32Int = 16777215 // 2^24-1
maxSafeFloat64Int = int64(9007199254740991) // 2^53-1
)
// Decoder decodes TOML data.
//
// TOML tables correspond to Go structs or maps (dealer's choice they can be
// used interchangeably).
// TOML tables correspond to Go structs or maps; they can be used
// interchangeably, but structs offer better type safety.
//
// TOML table arrays correspond to either a slice of structs or a slice of maps.
//
// TOML datetimes correspond to Go time.Time values. Local datetimes are parsed
// in the local timezone.
// TOML datetimes correspond to [time.Time]. Local datetimes are parsed in the
// local timezone.
//
// [time.Duration] types are treated as nanoseconds if the TOML value is an
// integer, or they're parsed with time.ParseDuration() if they're strings.
//
// All other TOML types (float, string, int, bool and array) correspond to the
// obvious Go types.
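A short usage sketch of the duration behavior described above, using the package's Decode function; the struct and field names are hypothetical.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/BurntSushi/toml"
)

type timeouts struct {
	Connect time.Duration // TOML string: parsed with time.ParseDuration
	Read    time.Duration // TOML integer: interpreted as nanoseconds
}

func main() {
	var t timeouts
	blob := `
Connect = "2s"
Read = 1500000000
`
	if _, err := toml.Decode(blob, &t); err != nil {
		log.Fatal(err)
	}
	fmt.Println(t.Connect, t.Read) // 2s 1.5s
}
```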
@ -74,9 +89,9 @@ func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
// interface, in which case any primitive TOML value (floats, strings, integers,
// booleans, datetimes) will be converted to a []byte and given to the value's
// UnmarshalText method. See the Unmarshaler example for a demonstration with
// time duration strings.
// email addresses.
//
// Key mapping
// # Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go struct.
// The special `toml` struct tag can be used to map TOML keys to struct fields
@ -100,18 +115,39 @@ func NewDecoder(r io.Reader) *Decoder {
return &Decoder{r: r}
}
var (
unmarshalToml = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
unmarshalText = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
primitiveType = reflect.TypeOf((*Primitive)(nil)).Elem()
)
// Decode TOML data in to the pointer `v`.
func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
s := "%q"
if reflect.TypeOf(v) == nil {
s = "%v"
}
return MetaData{}, fmt.Errorf("toml: cannot decode to non-pointer "+s, reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
return MetaData{}, fmt.Errorf("toml: cannot decode to nil value of %q", reflect.TypeOf(v))
}
// TODO: have parser should read from io.Reader? Or at the very least, make
// it read from []byte rather than string
// Check if this is a supported type: struct, map, interface{}, or something
// that implements UnmarshalTOML or UnmarshalText.
rv = indirect(rv)
rt := rv.Type()
if rv.Kind() != reflect.Struct && rv.Kind() != reflect.Map &&
!(rv.Kind() == reflect.Interface && rv.NumMethod() == 0) &&
!rt.Implements(unmarshalToml) && !rt.Implements(unmarshalText) {
return MetaData{}, fmt.Errorf("toml: cannot decode to type %s", rt)
}
// TODO: parser should read from io.Reader? Or at the very least, make it
// read from []byte rather than string
data, err := ioutil.ReadAll(dec.r)
if err != nil {
return MetaData{}, err
@ -121,29 +157,32 @@ func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
mapping: p.mapping,
keyInfo: p.keyInfo,
keys: p.ordered,
decoded: make(map[string]struct{}, len(p.ordered)),
context: nil,
data: data,
}
return md, md.unify(p.mapping, indirect(rv))
return md, md.unify(p.mapping, rv)
}
// Decode the TOML data in to the pointer v.
// PrimitiveDecode is just like the other Decode* functions, except it decodes a
// TOML value that has already been parsed. Valid primitive values can *only* be
// obtained from values filled by the decoder functions, including this method.
// (i.e., v may contain more [Primitive] values.)
//
// See the documentation on Decoder for a description of the decoding process.
func Decode(data string, v interface{}) (MetaData, error) {
return NewDecoder(strings.NewReader(data)).Decode(v)
}
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at path and decode it for you.
func DecodeFile(path string, v interface{}) (MetaData, error) {
fp, err := os.Open(path)
if err != nil {
return MetaData{}, err
}
defer fp.Close()
return NewDecoder(fp).Decode(v)
// Meta data for primitive values is included in the meta data returned by the
// Decode* functions with one exception: keys returned by the Undecoded method
// will only reflect keys that were decoded. Namely, any keys hidden behind a
// Primitive will be considered undecoded. Executing this method will update the
// undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
// unify performs a sort of type unification based on the structure of `rv`,
@ -154,7 +193,7 @@ func DecodeFile(path string, v interface{}) (MetaData, error) {
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
// TODO: #76 would make this superfluous after implemented.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
if rv.Type() == primitiveType {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
@ -166,17 +205,14 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
rvi := rv.Interface()
if v, ok := rvi.(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
if v, ok := rvi.(encoding.TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// TODO:
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
@ -187,7 +223,6 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
@ -213,17 +248,14 @@ func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
if rv.NumMethod() > 0 { /// Only empty interfaces are supported.
return md.e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
case reflect.Float32, reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
return md.e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
@ -232,7 +264,7 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
return md.e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
@ -254,17 +286,18 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.decoded[md.context.add(key).String()] = struct{}{}
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
err := md.unify(datum, subv)
if err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
return md.e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
}
}
}
@ -272,10 +305,10 @@ func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
if k := rv.Type().Key().Kind(); k != reflect.String {
return fmt.Errorf(
"toml: cannot decode to a map with non-string key type (%s in %q)",
k, rv.Type())
keyType := rv.Type().Key().Kind()
if keyType != reflect.String && keyType != reflect.Interface {
return fmt.Errorf("toml: cannot decode to a map with non-string key type (%s in %q)",
keyType, rv.Type())
}
tmap, ok := mapping.(map[string]interface{})
@ -283,23 +316,32 @@ func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
if tmap == nil {
return nil
}
return badtype("map", mapping)
return md.badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.decoded[md.context.add(k).String()] = struct{}{}
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
err := md.unify(v, indirect(rvval))
if err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rvkey := indirect(reflect.New(rv.Type().Key()))
switch keyType {
case reflect.Interface:
rvkey.Set(reflect.ValueOf(k))
case reflect.String:
rvkey.SetString(k)
}
rv.SetMapIndex(rvkey, rvval)
}
return nil
@ -311,10 +353,10 @@ func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
return md.badtype("slice", data)
}
if l := datav.Len(); l != rv.Len() {
return e("expected array length %d; got TOML array of length %d", rv.Len(), l)
return md.e("expected array length %d; got TOML array of length %d", rv.Len(), l)
}
return md.unifySliceArray(datav, rv)
}
@ -325,7 +367,7 @@ func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
return md.badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
@ -346,26 +388,35 @@ func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
_, ok := rv.Interface().(json.Number)
if ok {
if i, ok := data.(int64); ok {
rv.SetString(strconv.FormatInt(i, 10))
} else if f, ok := data.(float64); ok {
rv.SetString(strconv.FormatFloat(f, 'f', -1, 64))
} else {
return md.badtype("string", data)
}
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
return md.badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
rvk := rv.Kind()
if num, ok := data.(float64); ok {
switch rv.Kind() {
switch rvk {
case reflect.Float32:
if num < -math.MaxFloat32 || num > math.MaxFloat32 {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
fallthrough
case reflect.Float64:
rv.SetFloat(num)
@ -374,54 +425,60 @@ func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
}
return nil
}
return badtype("float", data)
if num, ok := data.(int64); ok {
if (rvk == reflect.Float32 && (num < -maxSafeFloat32Int || num > maxSafeFloat32Int)) ||
(rvk == reflect.Float64 && (num < -maxSafeFloat64Int || num > maxSafeFloat64Int)) {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
rv.SetFloat(float64(num))
return nil
}
return md.badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
_, ok := rv.Interface().(time.Duration)
if ok {
// Parse as string duration, and fall back to regular integer parsing
// (as nanosecond) if this is not a string.
if s, ok := data.(string); ok {
dur, err := time.ParseDuration(s)
if err != nil {
return md.parseErr(errParseDuration{s})
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
rv.SetInt(int64(dur))
return nil
}
return nil
}
return badtype("integer", data)
num, ok := data.(int64)
if !ok {
return md.badtype("integer", data)
}
rvk := rv.Kind()
switch {
case rvk >= reflect.Int && rvk <= reflect.Int64:
if (rvk == reflect.Int8 && (num < math.MinInt8 || num > math.MaxInt8)) ||
(rvk == reflect.Int16 && (num < math.MinInt16 || num > math.MaxInt16)) ||
(rvk == reflect.Int32 && (num < math.MinInt32 || num > math.MaxInt32)) {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
rv.SetInt(num)
case rvk >= reflect.Uint && rvk <= reflect.Uint64:
unum := uint64(num)
if rvk == reflect.Uint8 && (num < 0 || unum > math.MaxUint8) ||
rvk == reflect.Uint16 && (num < 0 || unum > math.MaxUint16) ||
rvk == reflect.Uint32 && (num < 0 || unum > math.MaxUint32) {
return md.parseErr(errParseRange{i: num, size: rvk.String()})
}
rv.SetUint(unum)
default:
panic("unreachable")
}
return nil
}
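In practice, the bounds checks above mean that an out-of-range value now fails decoding with a ParseError instead of being silently truncated; a quick sketch (field name hypothetical):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var cfg struct{ Small int8 }
	_, err := toml.Decode("Small = 300", &cfg)
	fmt.Println(err != nil) // true: 300 does not fit in an int8
}
```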
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
@ -429,7 +486,7 @@ func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
return md.badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
@ -440,7 +497,13 @@ func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
case Marshaler:
text, err := sdata.MarshalTOML()
if err != nil {
return err
}
s = string(text)
case encoding.TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
@ -457,7 +520,7 @@ func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) erro
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
return md.badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
@ -465,22 +528,54 @@ func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) erro
return nil
}
func (md *MetaData) badtype(dst string, data interface{}) error {
return md.e("incompatible types: TOML value has type %T; destination has type %s", data, dst)
}
func (md *MetaData) parseErr(err error) error {
k := md.context.String()
return ParseError{
LastKey: k,
Position: md.keyInfo[k].pos,
Line: md.keyInfo[k].pos.Line,
err: err,
input: string(md.data),
}
}
func (md *MetaData) e(format string, args ...interface{}) error {
f := "toml: "
if len(md.context) > 0 {
f = fmt.Sprintf("toml: (last key %q): ", md.context)
p := md.keyInfo[md.context.String()].pos
if p.Line > 0 {
f = fmt.Sprintf("toml: line %d (last key %q): ", p.Line, md.context)
}
}
return fmt.Errorf(f+format, args...)
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
// Pointers are followed until the value is not a pointer. New values are
// allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of interest
// to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(encoding.TextUnmarshaler); ok {
pvi := pv.Interface()
if _, ok := pvi.(encoding.TextUnmarshaler); ok {
return pv
}
if _, ok := pvi.(Unmarshaler); ok {
return pv
}
}
@ -496,16 +591,12 @@ func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(encoding.TextUnmarshaler); ok {
rvi := rv.Interface()
if _, ok := rvi.(encoding.TextUnmarshaler); ok {
return true
}
if _, ok := rvi.(Unmarshaler); ok {
return true
}
return false
}
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}


@ -1,3 +1,4 @@
//go:build go1.16
// +build go1.16
package toml
@ -6,8 +7,8 @@ import (
"io/fs"
)
// DecodeFS is just like Decode, except it will automatically read the contents
// of the file at `path` from a fs.FS instance.
// DecodeFS reads the contents of a file from [fs.FS] and decodes it with
// [Decode].
func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
fp, err := fsys.Open(path)
if err != nil {

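A small usage sketch of DecodeFS with an embedded filesystem; `config.toml` is a hypothetical file assumed to exist next to the source at build time.

```go
package main

import (
	"embed"
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

//go:embed config.toml
var configFS embed.FS

type config struct {
	Title string
}

func main() {
	var c config
	if _, err := toml.DecodeFS(configFS, "config.toml", &c); err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.Title)
}
```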

@ -1,123 +0,0 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not be
// inferable via reflection. In particular, whether a key has been defined and
// the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined reports if the key exists in the TOML data.
//
// The key should be specified hierarchically, for example to access the TOML
// key "a.b.c" you would use:
//
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that does
// not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key represents any TOML key, including key groups. Use (MetaData).Keys to get
// values of this type.
type Key []string
func (k Key) String() string { return strings.Join(k, ".") }
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
if k[i] == "" {
return `""`
}
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return `"` + quotedReplacer.Replace(k[i]) + `"`
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
//
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific. The list will have the same
// order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}


@ -5,29 +5,25 @@ import (
"io"
)
// DEPRECATED!
// TextMarshaler is an alias for encoding.TextMarshaler.
//
// Use the identical encoding.TextMarshaler instead. It is defined here to
// support Go 1.1 and older.
// Deprecated: use encoding.TextMarshaler
type TextMarshaler encoding.TextMarshaler
// DEPRECATED!
// TextUnmarshaler is an alias for encoding.TextUnmarshaler.
//
// Use the identical encoding.TextUnmarshaler instead. It is defined here to
// support Go 1.1 and older.
// Deprecated: use encoding.TextUnmarshaler
type TextUnmarshaler encoding.TextUnmarshaler
// DEPRECATED!
// PrimitiveDecode is an alias for MetaData.PrimitiveDecode().
//
// Use MetaData.PrimitiveDecode instead.
// Deprecated: use MetaData.PrimitiveDecode.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
md := MetaData{decoded: make(map[string]struct{})}
return md.unify(primValue.undecoded, rvalue(v))
}
// DEPRECATED!
// DecodeReader is an alias for NewDecoder(r).Decode(v).
//
// Use NewDecoder(reader).Decode(&v) instead.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
return NewDecoder(r).Decode(v)
}
// Deprecated: use NewDecoder(reader).Decode(&value).
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) { return NewDecoder(r).Decode(v) }


@ -1,13 +1,11 @@
/*
Package toml implements decoding and encoding of TOML files.
This package supports TOML v1.0.0, as listed on https://toml.io
There is also support for delaying decoding with the Primitive type, and
querying the set of keys in a TOML document with the MetaData type.
The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
and can be used to verify if TOML document is valid. It can also be used to
print the type of each key.
*/
// Package toml implements decoding and encoding of TOML files.
//
// This package supports TOML v1.0.0, as specified at https://toml.io
//
// There is also support for delaying decoding with the Primitive type, and
// querying the set of keys in a TOML document with the MetaData type.
//
// The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
// and can be used to verify if TOML document is valid. It can also be used to
// print the type of each key.
package toml


@ -3,6 +3,7 @@ package toml
import (
"bufio"
"encoding"
"encoding/json"
"errors"
"fmt"
"io"
@ -21,12 +22,11 @@ type tomlEncodeError struct{ error }
var (
errArrayNilElement = errors.New("toml: cannot encode array with nil element")
errNonString = errors.New("toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New("toml: cannot encode an anonymous field that is not a struct")
errNoKey = errors.New("toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
var dblQuotedReplacer = strings.NewReplacer(
"\"", "\\\"",
"\\", "\\\\",
"\x00", `\u0000`,
@ -64,35 +64,62 @@ var quotedReplacer = strings.NewReplacer(
"\x7f", `\u007f`,
)
var (
marshalToml = reflect.TypeOf((*Marshaler)(nil)).Elem()
marshalText = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
)
// Marshaler is the interface implemented by types that can marshal themselves
// into valid TOML.
type Marshaler interface {
MarshalTOML() ([]byte, error)
}
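A hedged sketch of a type implementing this new interface, with hypothetical names; the encoder writes the returned bytes verbatim, so they must already be valid TOML.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

type semver struct{ major, minor, patch int }

// MarshalTOML renders the version as a quoted TOML string.
func (v semver) MarshalTOML() ([]byte, error) {
	return []byte(fmt.Sprintf("%q", fmt.Sprintf("%d.%d.%d", v.major, v.minor, v.patch))), nil
}

func main() {
	doc := struct {
		Version semver `toml:"version"`
	}{Version: semver{1, 2, 3}}
	if err := toml.NewEncoder(os.Stdout).Encode(doc); err != nil {
		log.Fatal(err)
	}
	// Output:
	// version = "1.2.3"
}
```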
// Encoder encodes a Go to a TOML document.
//
// The mapping between Go values and TOML values should be precisely the same as
// for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.
// for [Decode].
//
// time.Time is encoded as a RFC 3339 string, and time.Duration as its string
// representation.
//
// The [Marshaler] and [encoding.TextMarshaler] interfaces are supported to
// encoding the value as custom TOML.
//
// If you want to write arbitrary binary data then you will need to use
// something like base64 since TOML does not have any binary types.
//
// When encoding TOML hashes (Go maps or structs), keys without any sub-hashes
// are encoded first.
//
// Go maps will be sorted alphabetically by key for deterministic output.
//
// The toml struct tag can be used to provide the key name; if omitted the
// struct field name will be used. If the "omitempty" option is present the
// following value will be skipped:
//
// - arrays, slices, maps, and string with len of 0
// - struct with all zero values
// - bool false
//
// If omitzero is given all int and float types with a value of 0 will be
// skipped.
//
// Encoding Go values without a corresponding TOML representation will return an
// error. Examples of this includes maps with non-string keys, slices with nil
// elements, embedded non-struct types, and nested slices containing maps or
// structs. (e.g. [][]map[string]string is not allowed but []map[string]string
// is okay, as is []map[string][]string).
//
// NOTE: Only exported keys are encoded due to the use of reflection. Unexported
// NOTE: only exported keys are encoded due to the use of reflection. Unexported
// keys are silently discarded.
type Encoder struct {
// The string to use for a single indentation level. The default is two
// spaces.
// String to use for a single indentation level; default is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
hasWritten bool // written any output to w yet?
}
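A minimal sketch of the tag options documented above (field names hypothetical): `omitempty` drops zero-length strings, slices, and maps and false booleans, while `omitzero` drops numeric zeroes.

```go
package main

import (
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

type settings struct {
	Name    string   `toml:"name"`
	Tags    []string `toml:"tags,omitempty"`   // skipped when empty
	Retries int      `toml:"retries,omitzero"` // skipped when 0
}

func main() {
	if err := toml.NewEncoder(os.Stdout).Encode(settings{Name: "example"}); err != nil {
		log.Fatal(err)
	}
	// Output (Tags and Retries omitted):
	// name = "example"
}
```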
// NewEncoder create a new Encoder.
@ -103,13 +130,14 @@ func NewEncoder(w io.Writer) *Encoder {
}
}
// Encode writes a TOML representation of the Go value to the Encoder's writer.
// Encode writes a TOML representation of the Go value to the [Encoder]'s writer.
//
// An error is returned if the value given cannot be encoded to a valid TOML
// document.
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
err := enc.safeEncode(Key([]string{}), rv)
if err != nil {
return err
}
return enc.w.Flush()
@ -130,17 +158,15 @@ func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we used that.
// Basically, this prevents the encoder for handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch t := rv.Interface().(type) {
case time.Time, encoding.TextMarshaler:
// If we can marshal the type to text, then we use that. This prevents the
// encoder for handling these types as generic structs (or whatever the
// underlying type of a TextMarshaler is).
switch {
case isMarshaler(rv):
enc.writeKeyValue(key, rv, false)
return
// TODO: #76 would make this superfluous after implemented.
case Primitive:
enc.encode(key, reflect.ValueOf(t.undecoded))
case rv.Type() == primitiveType: // TODO: #76 would make this superfluous after implemented.
enc.encode(key, reflect.ValueOf(rv.Interface().(Primitive).undecoded))
return
}
@ -200,17 +226,49 @@ func (enc *Encoder) eElement(rv reflect.Value) {
enc.wf(v.In(time.UTC).Format(format))
}
return
case encoding.TextMarshaler:
// Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
case Marshaler:
s, err := v.MarshalTOML()
if err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
if s == nil {
encPanic(errors.New("MarshalTOML returned nil and no error"))
}
enc.w.Write(s)
return
case encoding.TextMarshaler:
s, err := v.MarshalText()
if err != nil {
encPanic(err)
}
if s == nil {
encPanic(errors.New("MarshalText returned nil and no error"))
}
enc.writeQuoted(string(s))
return
case time.Duration:
enc.writeQuoted(v.String())
return
case json.Number:
n, _ := rv.Interface().(json.Number)
if n == "" { /// Useful zero value.
enc.w.WriteByte('0')
return
} else if v, err := n.Int64(); err == nil {
enc.eElement(reflect.ValueOf(v))
return
} else if v, err := n.Float64(); err == nil {
enc.eElement(reflect.ValueOf(v))
return
}
encPanic(fmt.Errorf("unable to convert %q to int64 or float64", n))
}
switch rv.Kind() {
case reflect.Ptr:
enc.eElement(rv.Elem())
return
case reflect.String:
enc.writeQuoted(rv.String())
case reflect.Bool:
@ -246,7 +304,7 @@ func (enc *Encoder) eElement(rv reflect.Value) {
case reflect.Interface:
enc.eElement(rv.Elem())
default:
encPanic(fmt.Errorf("unexpected primitive type: %T", rv.Interface()))
encPanic(fmt.Errorf("unexpected type: %T", rv.Interface()))
}
}
@ -260,14 +318,14 @@ func floatAddDecimal(fstr string) string {
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
enc.wf("\"%s\"", dblQuotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
elem := eindirect(rv.Index(i))
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
@ -281,12 +339,12 @@ func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
trv := eindirect(rv.Index(i))
if isNil(trv) {
continue
}
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.wf("%s[[%s]]", enc.indentStr(key), key)
enc.newline()
enc.eMapOrStruct(key, trv, false)
}
@ -299,14 +357,14 @@ func (enc *Encoder) eTable(key Key, rv reflect.Value) {
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.wf("%s[%s]", enc.indentStr(key), key)
enc.newline()
}
enc.eMapOrStruct(key, rv, false)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
switch rv := eindirect(rv); rv.Kind() {
switch rv.Kind() {
case reflect.Map:
enc.eMap(key, rv, inline)
case reflect.Struct:
@ -328,7 +386,7 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
if typeIsTable(tomlTypeOfGo(eindirect(rv.MapIndex(mapKey)))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
@ -338,7 +396,7 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
var writeMapKeys = func(mapKeys []string, trailC bool) {
sort.Strings(mapKeys)
for i, mapKey := range mapKeys {
val := rv.MapIndex(reflect.ValueOf(mapKey))
val := eindirect(rv.MapIndex(reflect.ValueOf(mapKey)))
if isNil(val) {
continue
}
@ -364,6 +422,15 @@ func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
}
}
const is32Bit = (32 << (^uint(0) >> 63)) == 32
func pointerTo(t reflect.Type) reflect.Type {
if t.Kind() == reflect.Ptr {
return pointerTo(t.Elem())
}
return t
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table then all keys under it will be in that
@ -380,35 +447,39 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
if f.PkgPath != "" && !f.Anonymous { /// Skip unexported fields.
isEmbed := f.Anonymous && pointerTo(f.Type).Kind() == reflect.Struct
if f.PkgPath != "" && !isEmbed { /// Skip unexported fields.
continue
}
opts := getOptions(f.Tag)
if opts.skip {
continue
}
frv := rv.Field(i)
frv := eindirect(rv.Field(i))
if is32Bit {
// Copy so it works correctly on 32-bit archs; not clear why this
// is needed. See #314, and https://www.reddit.com/r/golang/comments/pnx8v4
// This also works fine on 64-bit, but 32-bit archs are somewhat
// rare and this is a wee bit faster.
copyStart := make([]int, len(start))
copy(copyStart, start)
start = copyStart
}
// Treat anonymous struct fields with tag names as though they are
// not anonymous, like encoding/json does.
//
// Non-struct anonymous fields use the normal encoding logic.
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
if getOptions(f.Tag).name == "" {
addFields(t, frv, append(start, f.Index...))
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct && getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), append(start, f.Index...))
}
continue
}
if isEmbed {
if getOptions(f.Tag).name == "" && frv.Kind() == reflect.Struct {
addFields(frv.Type(), frv, append(start, f.Index...))
continue
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
if typeIsTable(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
@ -422,21 +493,25 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
fieldType := rt.FieldByIndex(fieldIndex)
fieldVal := rv.FieldByIndex(fieldIndex)
if isNil(fieldVal) { /// Don't write anything for nil fields.
continue
}
opts := getOptions(fieldType.Tag)
if opts.skip {
continue
}
if opts.omitempty && isEmpty(fieldVal) {
continue
}
fieldVal = eindirect(fieldVal)
if isNil(fieldVal) { /// Don't write anything for nil fields.
continue
}
keyName := fieldType.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(fieldVal) {
continue
}
if opts.omitzero && isZero(fieldVal) {
continue
}
@ -462,17 +537,32 @@ func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
}
}
// tomlTypeName returns the TOML type name of the Go value's type. It is
// used to determine whether the types of array elements are mixed (which is
// forbidden). If the Go value is nil, then it is illegal for it to be an array
// element, and valueIsNil is returned as true.
// Returns the TOML type of a Go value. The type may be `nil`, which means
// no concrete TOML type could be found.
// tomlTypeOfGo returns the TOML type name of the Go value's type.
//
// It is used to determine whether the types of array elements are mixed (which
// is forbidden). If the Go value is nil, then it is illegal for it to be an
// array element, and valueIsNil is returned as true.
//
// The type may be `nil`, which means no concrete TOML type could be found.
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
if rv.Kind() == reflect.Struct {
if rv.Type() == timeType {
return tomlDatetime
}
if isMarshaler(rv) {
return tomlString
}
return tomlHash
}
if isMarshaler(rv) {
return tomlString
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
@ -484,7 +574,7 @@ func tomlTypeOfGo(rv reflect.Value) tomlType {
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
if isTableArray(rv) {
return tomlArrayHash
}
return tomlArray
@ -494,56 +584,35 @@ func tomlTypeOfGo(rv reflect.Value) tomlType {
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case encoding.TextMarshaler:
return tomlString
default:
// Someone used a pointer receiver: we can make it work for pointer
// values.
if rv.CanAddr() {
_, ok := rv.Addr().Interface().(encoding.TextMarshaler)
if ok {
return tomlString
}
}
return tomlHash
}
default:
_, ok := rv.Interface().(encoding.TextMarshaler)
if ok {
return tomlString
}
encPanic(errors.New("unsupported type: " + rv.Kind().String()))
panic("") // Need *some* return value
panic("unreachable")
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero-length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
func isMarshaler(rv reflect.Value) bool {
return rv.Type().Implements(marshalText) || rv.Type().Implements(marshalToml)
}
// isTableArray reports if all entries in the array or slice are a table.
func isTableArray(arr reflect.Value) bool {
if isNil(arr) || !arr.IsValid() || arr.Len() == 0 {
return false
}
/// Don't allow nil.
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
if tomlTypeOfGo(rv.Index(i)) == nil {
ret := true
for i := 0; i < arr.Len(); i++ {
tt := tomlTypeOfGo(eindirect(arr.Index(i)))
// Don't allow nil.
if tt == nil {
encPanic(errArrayNilElement)
}
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
if ret && !typeEqual(tomlHash, tt) {
ret = false
}
}
return firstType
return ret
}
type tagOptions struct {
@ -588,8 +657,26 @@ func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Struct:
if rv.Type().Comparable() {
return reflect.Zero(rv.Type()).Interface() == rv.Interface()
}
// Need to also check if all the fields are empty, otherwise something
// like this with uncomparable types will always return true:
//
// type a struct{ field b }
// type b struct{ s []string }
// s := a{field: b{s: []string{"AAA"}}}
for i := 0; i < rv.NumField(); i++ {
if !isEmpty(rv.Field(i)) {
return false
}
}
return true
case reflect.Bool:
return !rv.Bool()
case reflect.Ptr:
return rv.IsNil()
}
return false
}
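The uncomparable-struct case above is easiest to see with a tagged field; a small sketch under the assumption of the vendored encoder's omitempty semantics, with invented type names:

package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type inner struct {
	S []string `toml:"s"`
}

type outer struct {
	Field inner `toml:"field,omitempty"` // inner is not comparable (slice field)
}

func main() {
	for _, v := range []outer{
		{},                                 // empty: the [field] table is skipped
		{Field: inner{S: []string{"AAA"}}}, // non-empty: [field] is written
	} {
		var buf bytes.Buffer
		if err := toml.NewEncoder(&buf).Encode(v); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%q\n", buf.String())
	}
}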
@ -602,12 +689,21 @@ func (enc *Encoder) newline() {
// Write a key/value pair:
//
// key = <any value>
// key = <any value>
//
// If inline is true it won't add a newline at the end.
// This is also used for "k = v" in inline tables; so something like this will
// be written in three calls:
//
// ┌───────────────────┐
// │ ┌───┐ ┌────┐│
// v v v v vv
// key = {k = 1, k2 = 2}
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
/// Marshaler used on top-level document; call eElement() to just call
/// Marshal{TOML,Text}.
if len(key) == 0 {
encPanic(errNoKey)
enc.eElement(val)
return
}
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
@ -617,7 +713,8 @@ func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
_, err := fmt.Fprintf(enc.w, format, v...)
if err != nil {
encPanic(err)
}
enc.hasWritten = true
@ -631,13 +728,25 @@ func encPanic(err error) {
panic(tomlEncodeError{err})
}
// Resolve any level of pointers to the actual value (e.g. **string → string).
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
if v.Kind() != reflect.Ptr && v.Kind() != reflect.Interface {
if isMarshaler(v) {
return v
}
if v.CanAddr() { /// Special case for marshalers; see #358.
if pv := v.Addr(); isMarshaler(pv) {
return pv
}
}
return v
}
if v.IsNil() {
return v
}
return eindirect(v.Elem())
}
func isNil(rv reflect.Value) bool {

279
vendor/github.com/BurntSushi/toml/error.go generated vendored Normal file

@ -0,0 +1,279 @@
package toml
import (
"fmt"
"strings"
)
// ParseError is returned when there is an error parsing the TOML syntax such as
// invalid syntax, duplicate keys, etc.
//
// In addition to the error message itself, you can also print detailed location
// information with context by using [ErrorWithPosition]:
//
// toml: error: Key 'fruit' was already created and cannot be used as an array.
//
// At line 4, column 2-7:
//
// 2 | fruit = []
// 3 |
// 4 | [[fruit]] # Not allowed
// ^^^^^
//
// [ErrorWithUsage] can be used to print the above with some more detailed usage
// guidance:
//
// toml: error: newlines not allowed within inline tables
//
// At line 1, column 18:
//
// 1 | x = [{ key = 42 #
// ^
//
// Error help:
//
// Inline tables must always be on a single line:
//
// table = {key = 42, second = 43}
//
// It is invalid to split them over multiple lines like so:
//
// # INVALID
// table = {
// key = 42,
// second = 43
// }
//
// Use regular tables for this:
//
// [table]
// key = 42
// second = 43
type ParseError struct {
Message string // Short technical message.
Usage string // Longer message with usage guidance; may be blank.
Position Position // Position of the error
LastKey string // Last parsed key, may be blank.
// Line the error occurred.
//
// Deprecated: use [Position].
Line int
err error
input string
}
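For callers of the library, the richer output usually comes from unwrapping to ParseError after a failed Decode; a minimal sketch, where the TOML input mirrors the example in the comment above:

package main

import (
	"errors"
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	doc := "fruit = []\n\n[[fruit]] # Not allowed\n"

	var v map[string]interface{}
	if _, err := toml.Decode(doc, &v); err != nil {
		var pErr toml.ParseError
		if errors.As(err, &pErr) {
			fmt.Println(pErr.ErrorWithPosition()) // multi-line report with source context
			return
		}
		fmt.Println(err)
	}
}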
// Position of an error.
type Position struct {
Line int // Line number, starting at 1.
Start int // Start of error, as byte offset starting at 0.
Len int // Length in bytes.
}
func (pe ParseError) Error() string {
msg := pe.Message
if msg == "" { // Error from errorf()
msg = pe.err.Error()
}
if pe.LastKey == "" {
return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, msg)
}
return fmt.Sprintf("toml: line %d (last key %q): %s",
pe.Position.Line, pe.LastKey, msg)
}
// ErrorWithPosition returns the error with detailed location context.
//
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithPosition() string {
if pe.input == "" { // Should never happen, but just in case.
return pe.Error()
}
var (
lines = strings.Split(pe.input, "\n")
col = pe.column(lines)
b = new(strings.Builder)
)
msg := pe.Message
if msg == "" {
msg = pe.err.Error()
}
// TODO: don't show control characters as literals? This may not show up
// well everywhere.
if pe.Position.Len == 1 {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d:\n\n",
msg, pe.Position.Line, col+1)
} else {
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d-%d:\n\n",
msg, pe.Position.Line, col, col+pe.Position.Len)
}
if pe.Position.Line > 2 {
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-2, lines[pe.Position.Line-3])
}
if pe.Position.Line > 1 {
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-1, lines[pe.Position.Line-2])
}
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line, lines[pe.Position.Line-1])
fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", col), strings.Repeat("^", pe.Position.Len))
return b.String()
}
// ErrorWithUsage returns the error with detailed location context and usage
// guidance.
//
// See the documentation on [ParseError].
func (pe ParseError) ErrorWithUsage() string {
m := pe.ErrorWithPosition()
if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {
lines := strings.Split(strings.TrimSpace(u.Usage()), "\n")
for i := range lines {
if lines[i] != "" {
lines[i] = " " + lines[i]
}
}
return m + "Error help:\n\n" + strings.Join(lines, "\n") + "\n"
}
return m
}
func (pe ParseError) column(lines []string) int {
var pos, col int
for i := range lines {
ll := len(lines[i]) + 1 // +1 for the removed newline
if pos+ll >= pe.Position.Start {
col = pe.Position.Start - pos
if col < 0 { // Should never happen, but just in case.
col = 0
}
break
}
pos += ll
}
return col
}
type (
errLexControl struct{ r rune }
errLexEscape struct{ r rune }
errLexUTF8 struct{ b byte }
errLexInvalidNum struct{ v string }
errLexInvalidDate struct{ v string }
errLexInlineTableNL struct{}
errLexStringNL struct{}
errParseRange struct {
i interface{} // int or float
size string // "int64", "uint16", etc.
}
errParseDuration struct{ d string }
)
func (e errLexControl) Error() string {
return fmt.Sprintf("TOML files cannot contain control characters: '0x%02x'", e.r)
}
func (e errLexControl) Usage() string { return "" }
func (e errLexEscape) Error() string { return fmt.Sprintf(`invalid escape in string '\%c'`, e.r) }
func (e errLexEscape) Usage() string { return usageEscape }
func (e errLexUTF8) Error() string { return fmt.Sprintf("invalid UTF-8 byte: 0x%02x", e.b) }
func (e errLexUTF8) Usage() string { return "" }
func (e errLexInvalidNum) Error() string { return fmt.Sprintf("invalid number: %q", e.v) }
func (e errLexInvalidNum) Usage() string { return "" }
func (e errLexInvalidDate) Error() string { return fmt.Sprintf("invalid date: %q", e.v) }
func (e errLexInvalidDate) Usage() string { return "" }
func (e errLexInlineTableNL) Error() string { return "newlines not allowed within inline tables" }
func (e errLexInlineTableNL) Usage() string { return usageInlineNewline }
func (e errLexStringNL) Error() string { return "strings cannot contain newlines" }
func (e errLexStringNL) Usage() string { return usageStringNewline }
func (e errParseRange) Error() string { return fmt.Sprintf("%v is out of range for %s", e.i, e.size) }
func (e errParseRange) Usage() string { return usageIntOverflow }
func (e errParseDuration) Error() string { return fmt.Sprintf("invalid duration: %q", e.d) }
func (e errParseDuration) Usage() string { return usageDuration }
const usageEscape = `
A '\' inside a "-delimited string is interpreted as an escape character.
The following escape sequences are supported:
\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX
To prevent a '\' from being recognized as an escape character, use either:
- a ' or '''-delimited string; escape characters aren't processed in them; or
- write two backslashes to get a single backslash: '\\'.
If you're trying to add a Windows path (e.g. "C:\Users\martin") then using '/'
instead of '\' will usually also work: "C:/Users/martin".
`
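A small decoding sketch of the same advice (the key names are invented): the escaped form in a basic string and the verbatim form in a literal string decode to the same Go value.

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	doc := `
escaped = "C:\\Users\\martin"
literal = 'C:\Users\martin'
`
	var v struct {
		Escaped string `toml:"escaped"`
		Literal string `toml:"literal"`
	}
	if _, err := toml.Decode(doc, &v); err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.Escaped == v.Literal) // true
}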
const usageInlineNewline = `
Inline tables must always be on a single line:
table = {key = 42, second = 43}
It is invalid to split them over multiple lines like so:
# INVALID
table = {
key = 42,
second = 43
}
Use regular tables for this:
[table]
key = 42
second = 43
`
const usageStringNewline = `
Strings must always be on a single line, and cannot span more than one line:
# INVALID
string = "Hello,
world!"
Instead use """ or ''' to split strings over multiple lines:
string = """Hello,
world!"""
`
const usageIntOverflow = `
This number is too large; this may be an error in the TOML, but it can also be
a bug in the program that uses an integer type that is too small.
The maximum and minimum values are:
size     lowest                 highest
int8     -128                   127
int16    -32,768                32,767
int32    -2,147,483,648         2,147,483,647
int64    -9.2 × 10¹⁸            9.2 × 10¹⁸
uint8    0                      255
uint16   0                      65535
uint32   0                      4294967295
uint64   0                      1.8 × 10¹⁹
int refers to int32 on 32-bit systems and int64 on 64-bit systems.
`
const usageDuration = `
A duration must be given as "number<unit>", without any spaces. Valid units are:
ns nanoseconds (billionth of a second)
us, µs microseconds (millionth of a second)
ms milliseconds (thousandth of a second)
s seconds
m minutes
h hours
You can combine multiple units; for example "5m10s" for 5 minutes and 10
seconds.
`
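This library accepts durations written as strings when decoding into a time.Duration field; a sketch under that assumption, with an invented key and struct field:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/BurntSushi/toml"
)

func main() {
	var cfg struct {
		Timeout time.Duration `toml:"timeout"`
	}
	// "5m10s" combines two units, as described above.
	if _, err := toml.Decode(`timeout = "5m10s"`, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Timeout) // 5m10s
}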


@ -37,56 +37,43 @@ const (
itemInlineTableEnd
)
const (
eof = 0
comma = ','
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
inlineTableStart = '{'
inlineTableEnd = '}'
)
const eof = 0
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
func (p Position) String() string {
return fmt.Sprintf("at line %d; start %d; length %d", p.Line, p.Start, p.Len)
}
// Allow for backing up up to four runes.
// This is necessary because TOML contains 3-rune tokens (""" and ''').
type lexer struct {
input string
start int
pos int
line int
state stateFn
items chan item
tomlNext bool
// Allow for backing up up to 4 runes. This is necessary because TOML
// contains 3-rune tokens (""" and ''').
prevWidths [4]int
nprev int // how many of prevWidths are in use
// If we emit an eof, we can still back up, but it is not OK to call
// next again.
atEOF bool
nprev int // how many of prevWidths are in use
atEOF bool // If we emit an eof, we can still back up, but it is not OK to call next again.
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
//
// The idea is to reuse parts of the state machine in various places. For
// example, values can appear at the top level or within arbitrarily nested
// arrays. The last state on the stack is used after a value has been lexed.
// Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
typ itemType
val string
err error
pos Position
}
func (lx *lexer) nextItem() item {
@ -96,18 +83,19 @@ func (lx *lexer) nextItem() item {
return item
default:
lx.state = lx.state(lx)
//fmt.Printf(" STATE %-24s current: %-10q stack: %s\n", lx.state, lx.current(), lx.stack)
//fmt.Printf(" STATE %-24s current: %-10s stack: %s\n", lx.state, lx.current(), lx.stack)
}
}
}
func lex(input string) *lexer {
func lex(input string, tomlNext bool) *lexer {
lx := &lexer{
input: input,
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
input: input,
state: lexTop,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
line: 1,
tomlNext: tomlNext,
}
return lx
}
@ -129,13 +117,30 @@ func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx lexer) getPos() Position {
p := Position{
Line: lx.line,
Start: lx.start,
Len: lx.pos - lx.start,
}
if p.Len <= 0 {
p.Len = 1
}
return p
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
// Needed for multiline strings ending with an incomplete UTF-8 sequence.
if lx.start > lx.pos {
lx.error(errLexUTF8{lx.input[lx.pos]})
return
}
lx.items <- item{typ: typ, pos: lx.getPos(), val: lx.current()}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.items <- item{typ: typ, pos: lx.getPos(), val: strings.TrimSpace(lx.current())}
lx.start = lx.pos
}
@ -160,7 +165,13 @@ func (lx *lexer) next() (r rune) {
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
if r == utf8.RuneError {
lx.errorf("invalid UTF-8 byte at position %d (line %d): 0x%02x", lx.pos, lx.line, lx.input[lx.pos])
lx.error(errLexUTF8{lx.input[lx.pos]})
return utf8.RuneError
}
// Note: don't use peek() here, as this calls next().
if isControl(r) || (r == '\r' && (len(lx.input)-1 == lx.pos || lx.input[lx.pos+1] != '\n')) {
lx.errorControlChar(r)
return utf8.RuneError
}
@ -188,6 +199,7 @@ func (lx *lexer) backup() {
lx.prevWidths[1] = lx.prevWidths[2]
lx.prevWidths[2] = lx.prevWidths[3]
lx.nprev--
lx.pos -= w
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
@ -223,18 +235,58 @@ func (lx *lexer) skip(pred func(rune) bool) {
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// error stops all lexing by emitting an error and returning `nil`.
//
// Note that any value that is a character is escaped if it's a special
// character (newlines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
func (lx *lexer) error(err error) stateFn {
if lx.atEOF {
return lx.errorPrevLine(err)
}
lx.items <- item{typ: itemError, pos: lx.getPos(), err: err}
return nil
}
// errorPrevLine is like error(), but sets the position to the last column of
// the previous line.
//
// This is so that unexpected EOF or NL errors don't show on a new blank line.
func (lx *lexer) errorPrevLine(err error) stateFn {
pos := lx.getPos()
pos.Line--
pos.Len = 1
pos.Start = lx.pos - 1
lx.items <- item{typ: itemError, pos: pos, err: err}
return nil
}
// errorPos is like error(), but allows explicitly setting the position.
func (lx *lexer) errorPos(start, length int, err error) stateFn {
pos := lx.getPos()
pos.Start = start
pos.Len = length
lx.items <- item{typ: itemError, pos: pos, err: err}
return nil
}
// errorf is like error, and creates a new error.
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
if lx.atEOF {
pos := lx.getPos()
pos.Line--
pos.Len = 1
pos.Start = lx.pos - 1
lx.items <- item{typ: itemError, pos: pos, err: fmt.Errorf(format, values...)}
return nil
}
lx.items <- item{typ: itemError, pos: lx.getPos(), err: fmt.Errorf(format, values...)}
return nil
}
func (lx *lexer) errorControlChar(cc rune) stateFn {
return lx.errorPos(lx.pos-1, 1, errLexControl{cc})
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
@ -242,10 +294,10 @@ func lexTop(lx *lexer) stateFn {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
case '#':
lx.push(lexTop)
return lexCommentStart
case tableStart:
case '[':
return lexTableStart
case eof:
if lx.pos > lx.start {
@ -268,7 +320,7 @@ func lexTop(lx *lexer) stateFn {
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
case r == '#':
// a comment will read to a newline for us.
lx.push(lexTop)
return lexCommentStart
@ -292,7 +344,7 @@ func lexTopEnd(lx *lexer) stateFn {
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
if lx.peek() == '[' {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
@ -309,10 +361,8 @@ func lexTableEnd(lx *lexer) stateFn {
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf(
"expected end of table array name delimiter %q, but got %q instead",
arrayTableEnd, r)
if r := lx.next(); r != ']' {
return lx.errorf("expected end of table array name delimiter ']', but got %q instead", r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
@ -321,11 +371,11 @@ func lexArrayTableEnd(lx *lexer) stateFn {
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
case r == ']' || r == eof:
return lx.errorf("unexpected end of table name (table names cannot be empty)")
case r == tableSep:
case r == '.':
return lx.errorf("unexpected table separator (table names cannot be empty)")
case r == stringStart || r == rawStringStart:
case r == '"' || r == '\'':
lx.ignore()
lx.push(lexTableNameEnd)
return lexQuotedName
@ -342,10 +392,10 @@ func lexTableNameEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
case r == '.':
lx.ignore()
return lexTableNameStart
case r == tableEnd:
case r == ']':
return lx.pop()
default:
return lx.errorf("expected '.' or ']' to end table name, but got %q instead", r)
@ -360,7 +410,7 @@ func lexTableNameEnd(lx *lexer) stateFn {
// Lexes only one part, e.g. only 'a' inside 'a.b'.
func lexBareName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
if isBareKeyChar(r, lx.tomlNext) {
return lexBareName
}
lx.backup()
@ -379,10 +429,10 @@ func lexQuotedName(lx *lexer) stateFn {
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case r == stringStart:
case r == '"':
lx.ignore() // ignore the '"'
return lexString
case r == rawStringStart:
case r == '\'':
lx.ignore() // ignore the "'"
return lexRawString
case r == eof:
@ -400,7 +450,7 @@ func lexKeyStart(lx *lexer) stateFn {
return lx.errorf("unexpected '=': key name appears blank")
case r == '.':
return lx.errorf("unexpected '.': keys cannot start with a '.'")
case r == stringStart || r == rawStringStart:
case r == '"' || r == '\'':
lx.ignore()
fallthrough
default: // Bare key
@ -416,7 +466,7 @@ func lexKeyNameStart(lx *lexer) stateFn {
return lx.errorf("unexpected '='")
case r == '.':
return lx.errorf("unexpected '.'")
case r == stringStart || r == rawStringStart:
case r == '"' || r == '\'':
lx.ignore()
lx.push(lexKeyEnd)
return lexQuotedName
@ -434,7 +484,7 @@ func lexKeyEnd(lx *lexer) stateFn {
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
case r == eof:
return lx.errorf("unexpected EOF; expected key separator %q", keySep)
return lx.errorf("unexpected EOF; expected key separator '='")
case r == '.':
lx.ignore()
return lexKeyNameStart
@ -461,17 +511,17 @@ func lexValue(lx *lexer) stateFn {
return lexNumberOrDateStart
}
switch r {
case arrayStart:
case '[':
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case inlineTableStart:
case '{':
lx.ignore()
lx.emit(itemInlineTableStart)
return lexInlineTableValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
case '"':
if lx.accept('"') {
if lx.accept('"') {
lx.ignore() // Ignore """
return lexMultilineString
}
@ -479,9 +529,9 @@ func lexValue(lx *lexer) stateFn {
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
case '\'':
if lx.accept('\'') {
if lx.accept('\'') {
lx.ignore() // Ignore """
return lexMultilineRawString
}
@ -520,14 +570,12 @@ func lexArrayValue(lx *lexer) stateFn {
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
case r == '#':
lx.push(lexArrayValue)
return lexCommentStart
case r == comma:
case r == ',':
return lx.errorf("unexpected comma")
case r == arrayEnd:
// NOTE(caleb): The spec isn't clear about whether you can have
// a trailing comma or not, so we'll allow it.
case r == ']':
return lexArrayEnd
}
@ -540,22 +588,20 @@ func lexArrayValue(lx *lexer) stateFn {
// the next value (or the end of the array): it ignores whitespace and newlines
// and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
switch r := lx.next(); {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
case r == '#':
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == comma:
case r == ',':
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
case r == ']':
return lexArrayEnd
default:
return lx.errorf("expected a comma (',') or array terminator (']'), but got %s", runeOrEOF(r))
}
return lx.errorf(
"expected a comma or array terminator %q, but got %s instead",
arrayEnd, runeOrEOF(r))
}
// lexArrayEnd finishes the lexing of an array.
@ -574,13 +620,16 @@ func lexInlineTableValue(lx *lexer) stateFn {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValue)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
if lx.tomlNext {
return lexSkip(lx, lexInlineTableValue)
}
return lx.errorPrevLine(errLexInlineTableNL{})
case r == '#':
lx.push(lexInlineTableValue)
return lexCommentStart
case r == comma:
case r == ',':
return lx.errorf("unexpected comma")
case r == inlineTableEnd:
case r == '}':
return lexInlineTableEnd
}
lx.backup()
@ -596,23 +645,27 @@ func lexInlineTableValueEnd(lx *lexer) stateFn {
case isWhitespace(r):
return lexSkip(lx, lexInlineTableValueEnd)
case isNL(r):
return lx.errorf("newlines not allowed within inline tables")
case r == commentStart:
if lx.tomlNext {
return lexSkip(lx, lexInlineTableValueEnd)
}
return lx.errorPrevLine(errLexInlineTableNL{})
case r == '#':
lx.push(lexInlineTableValueEnd)
return lexCommentStart
case r == comma:
case r == ',':
lx.ignore()
lx.skip(isWhitespace)
if lx.peek() == '}' {
if lx.tomlNext {
return lexInlineTableValueEnd
}
return lx.errorf("trailing comma not allowed in inline tables")
}
return lexInlineTableValue
case r == inlineTableEnd:
case r == '}':
return lexInlineTableEnd
default:
return lx.errorf(
"expected a comma or an inline table terminator %q, but got %s instead",
inlineTableEnd, runeOrEOF(r))
return lx.errorf("expected a comma or an inline table terminator '}', but got %s instead", runeOrEOF(r))
}
}
@ -638,14 +691,12 @@ func lexString(lx *lexer) stateFn {
switch {
case r == eof:
return lx.errorf(`unexpected EOF; expected '"'`)
case isControl(r) || r == '\r':
return lx.errorf("control characters are not allowed inside strings: '0x%02x'", r)
case isNL(r):
return lx.errorf("strings cannot contain newlines")
return lx.errorPrevLine(errLexStringNL{})
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
case r == '"':
lx.backup()
lx.emit(itemString)
lx.next()
@ -660,26 +711,33 @@ func lexString(lx *lexer) stateFn {
func lexMultilineString(lx *lexer) stateFn {
r := lx.next()
switch r {
default:
return lexMultilineString
case eof:
return lx.errorf(`unexpected EOF; expected '"""'`)
case '\r':
if lx.peek() != '\n' {
return lx.errorf("control characters are not allowed inside strings: '0x%02x'", r)
}
return lexMultilineString
case '\\':
return lexMultilineStringEscape
case stringEnd:
case '"':
/// Found " → try to read two more "".
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
if lx.accept('"') {
if lx.accept('"') {
/// Peek ahead: the string can contain " and "", including at the
/// end: """str"""""
/// 6 or more at the end, however, is an error.
if lx.peek() == stringEnd {
if lx.peek() == '"' {
/// Check if we already lexed 5 's; if so we have 6 now, and
/// that's just too many man!
if strings.HasSuffix(lx.current(), `"""""`) {
///
/// Second check is for the edge case:
///
/// two quotes allowed.
/// vv
/// """lol \""""""
/// ^^ ^^^---- closing three
/// escaped
///
/// Bit ugly, but it works
if strings.HasSuffix(lx.current(), `"""""`) && !strings.HasSuffix(lx.current(), `\"""""`) {
return lx.errorf(`unexpected '""""""'`)
}
lx.backup()
@ -699,12 +757,8 @@ func lexMultilineString(lx *lexer) stateFn {
}
lx.backup()
}
return lexMultilineString
}
if isControl(r) {
return lx.errorf("control characters are not allowed inside strings: '0x%02x'", r)
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
@ -712,43 +766,39 @@ func lexMultilineString(lx *lexer) stateFn {
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
default:
return lexRawString
case r == eof:
return lx.errorf(`unexpected EOF; expected "'"`)
case isControl(r) || r == '\r':
return lx.errorf("control characters are not allowed inside strings: '0x%02x'", r)
case isNL(r):
return lx.errorf("strings cannot contain newlines")
case r == rawStringEnd:
return lx.errorPrevLine(errLexStringNL{})
case r == '\'':
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such a
// string. It assumes that the beginning triple-' has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
r := lx.next()
switch r {
default:
return lexMultilineRawString
case eof:
return lx.errorf(`unexpected EOF; expected "'''"`)
case '\r':
if lx.peek() != '\n' {
return lx.errorf("control characters are not allowed inside strings: '0x%02x'", r)
}
return lexMultilineRawString
case rawStringEnd:
case '\'':
/// Found ' → try to read two more ''.
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
if lx.accept('\'') {
if lx.accept('\'') {
/// Peek ahead: the string can contain ' and '', including at the
/// end: '''str'''''
/// 6 or more at the end, however, is an error.
if lx.peek() == rawStringEnd {
if lx.peek() == '\'' {
/// Check if we already lexed 5 's; if so we have 6 now, and
/// that's just too many man!
if strings.HasSuffix(lx.current(), "'''''") {
@ -771,19 +821,14 @@ func lexMultilineRawString(lx *lexer) stateFn {
}
lx.backup()
}
return lexMultilineRawString
}
if isControl(r) {
return lx.errorf("control characters are not allowed inside strings: '0x%02x'", r)
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
if isNL(lx.next()) { /// \ escaping newline.
return lexMultilineString
}
lx.backup()
@ -794,6 +839,11 @@ func lexMultilineStringEscape(lx *lexer) stateFn {
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'e':
if !lx.tomlNext {
return lx.error(errLexEscape{r})
}
fallthrough
case 'b':
fallthrough
case 't':
@ -812,13 +862,30 @@ func lexStringEscape(lx *lexer) stateFn {
fallthrough
case '\\':
return lx.pop()
case 'x':
if !lx.tomlNext {
return lx.error(errLexEscape{r})
}
return lexHexEscape
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("invalid escape character %q; only the following escape characters are allowed: "+
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
return lx.error(errLexEscape{r})
}
func lexHexEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 2; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf(
`expected two hexadecimal digits after '\x', but got %q instead`,
lx.current())
}
}
return lx.pop()
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
@ -1108,8 +1175,6 @@ func lexComment(lx *lexer) stateFn {
lx.backup()
lx.emit(itemText)
return lx.pop()
case isControl(r):
return lx.errorf("control characters are not allowed inside comments: '0x%02x'", r)
default:
return lexComment
}
@ -1121,52 +1186,6 @@ func lexSkip(lx *lexer, nextState stateFn) stateFn {
return nextState
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
// Control characters except \n, \t
func isControl(r rune) bool {
switch r {
case '\t', '\r', '\n':
return false
default:
return (r >= 0x00 && r <= 0x1f) || r == 0x7f
}
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isOctal(r rune) bool {
return r >= '0' && r <= '7'
}
func isBinary(r rune) bool {
return r == '0' || r == '1'
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (s stateFn) String() string {
name := runtime.FuncForPC(reflect.ValueOf(s).Pointer()).Name()
if i := strings.LastIndexByte(name, '.'); i > -1 {
@ -1223,3 +1242,42 @@ func (itype itemType) String() string {
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}
func isWhitespace(r rune) bool { return r == '\t' || r == ' ' }
func isNL(r rune) bool { return r == '\n' || r == '\r' }
func isControl(r rune) bool { // Control characters except \t, \r, \n
switch r {
case '\t', '\r', '\n':
return false
default:
return (r >= 0x00 && r <= 0x1f) || r == 0x7f
}
}
func isDigit(r rune) bool { return r >= '0' && r <= '9' }
func isBinary(r rune) bool { return r == '0' || r == '1' }
func isOctal(r rune) bool { return r >= '0' && r <= '7' }
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune, tomlNext bool) bool {
if tomlNext {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' || r == '-' ||
r == 0xb2 || r == 0xb3 || r == 0xb9 || (r >= 0xbc && r <= 0xbe) ||
(r >= 0xc0 && r <= 0xd6) || (r >= 0xd8 && r <= 0xf6) || (r >= 0xf8 && r <= 0x037d) ||
(r >= 0x037f && r <= 0x1fff) ||
(r >= 0x200c && r <= 0x200d) || (r >= 0x203f && r <= 0x2040) ||
(r >= 0x2070 && r <= 0x218f) || (r >= 0x2460 && r <= 0x24ff) ||
(r >= 0x2c00 && r <= 0x2fef) || (r >= 0x3001 && r <= 0xd7ff) ||
(r >= 0xf900 && r <= 0xfdcf) || (r >= 0xfdf0 && r <= 0xfffd) ||
(r >= 0x10000 && r <= 0xeffff)
}
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' || r == '-'
}

121
vendor/github.com/BurntSushi/toml/meta.go generated vendored Normal file

@ -0,0 +1,121 @@
package toml
import (
"strings"
)
// MetaData allows access to meta information about TOML data that's not
// accessible otherwise.
//
// It allows checking if a key is defined in the TOML data, whether any keys
// were undecoded, and the TOML type of a key.
type MetaData struct {
context Key // Used only during decoding.
keyInfo map[string]keyInfo
mapping map[string]interface{}
keys []Key
decoded map[string]struct{}
data []byte // Input file; for errors.
}
// IsDefined reports if the key exists in the TOML data.
//
// The key should be specified hierarchically, for example to access the TOML
// key "a.b.c" you would use IsDefined("a", "b", "c"). Keys are case sensitive.
//
// Returns false for an empty key.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var (
hash map[string]interface{}
ok bool
hashOrVal interface{} = md.mapping
)
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
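A usage sketch for IsDefined and Type; the [server] table and its keys are invented for illustration:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}
	md, err := toml.Decode("[server]\nport = 8080\n", &v)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(md.IsDefined("server", "port")) // true
	fmt.Println(md.IsDefined("server", "host")) // false
	fmt.Println(md.Type("server", "port"))      // "Integer"
}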
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that does
// not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
if ki, ok := md.keyInfo[Key(key).String()]; ok {
return ki.tomlType.typeString()
}
return ""
}
// Keys returns a slice of every key in the TOML data, including key groups.
//
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific. The list will have the same
// order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a [Primitive] value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if _, ok := md.decoded[key.String()]; !ok {
undecoded = append(undecoded, key)
}
}
return undecoded
}
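Undecoded is handy for warning about unknown or misspelled configuration keys; a sketch with an invented config struct and keys:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type config struct {
	Name string `toml:"name"`
}

func main() {
	var cfg config
	md, err := toml.Decode("name = \"x\"\ntypo_key = 1\n", &cfg)
	if err != nil {
		log.Fatal(err)
	}
	for _, key := range md.Undecoded() {
		fmt.Printf("warning: unknown key %q\n", key.String()) // warning: unknown key "typo_key"
	}
}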
// Key represents any TOML key, including key groups. Use [MetaData.Keys] to get
// values of this type.
type Key []string
func (k Key) String() string {
ss := make([]string, len(k))
for i := range k {
ss[i] = k.maybeQuoted(i)
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
if k[i] == "" {
return `""`
}
for _, c := range k[i] {
if !isBareKeyChar(c, false) {
return `"` + dblQuotedReplacer.Replace(k[i]) + `"`
}
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}


@ -1,8 +1,8 @@
package toml
import (
"errors"
"fmt"
"os"
"strconv"
"strings"
"time"
@ -12,35 +12,32 @@ import (
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
lx *lexer
context Key // Full key for the current hash in scope.
currentKey string // Base key name for everything except hashes.
pos Position // Current position in the TOML file.
tomlNext bool
ordered []Key // List of keys in the order that they appear in the TOML data.
context Key // Full key for the current hash in scope.
currentKey string // Base key name for everything except hashes.
approxLine int // Rough approximation of line number
implicits map[string]bool // Record implied keys (e.g. 'key.group.names').
ordered []Key // List of keys in the order that they appear in the TOML data.
keyInfo map[string]keyInfo // Map keyname → info about the TOML key.
mapping map[string]interface{} // Map keyname → key value.
implicits map[string]struct{} // Record implicit keys (e.g. "key.group.names").
}
// ParseError is used when a file can't be parsed: for example invalid integer
// literals, duplicate keys, etc.
type ParseError struct {
Message string
Line int
LastKey string
}
func (pe ParseError) Error() string {
return fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
pe.Line, pe.LastKey, pe.Message)
type keyInfo struct {
pos Position
tomlType tomlType
}
func parse(data string) (p *parser, err error) {
_, tomlNext := os.LookupEnv("BURNTSUSHI_TOML_110")
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(ParseError); ok {
if pErr, ok := r.(ParseError); ok {
pErr.input = data
err = pErr
return
}
panic(r)
@ -48,9 +45,12 @@ func parse(data string) (p *parser, err error) {
}()
// Read over BOM; do this here as the lexer calls utf8.DecodeRuneInString()
// which mangles stuff.
if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") {
// which mangles stuff. UTF-16 BOM isn't strictly valid, but some tools add
// it anyway.
if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") { // UTF-16
data = data[2:]
} else if strings.HasPrefix(data, "\xef\xbb\xbf") { // UTF-8
data = data[3:]
}
// Examine first few bytes for NULL bytes; this probably means it's a UTF-16
@ -60,16 +60,22 @@ func parse(data string) (p *parser, err error) {
if len(data) < 6 {
ex = len(data)
}
if strings.ContainsRune(data[:ex], 0) {
return nil, errors.New("files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8")
if i := strings.IndexRune(data[:ex], 0); i > -1 {
return nil, ParseError{
Message: "files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8",
Position: Position{Line: 1, Start: i, Len: 1},
Line: 1,
input: data,
}
}
p = &parser{
keyInfo: make(map[string]keyInfo),
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
lx: lex(data, tomlNext),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
implicits: make(map[string]struct{}),
tomlNext: tomlNext,
}
for {
item := p.next()
@ -82,24 +88,57 @@ func parse(data string) (p *parser, err error) {
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf(format, v...)
func (p *parser) panicErr(it item, err error) {
panic(ParseError{
Message: msg,
Line: p.approxLine,
LastKey: p.current(),
err: err,
Position: it.pos,
Line: it.pos.Len,
LastKey: p.current(),
})
}
func (p *parser) panicItemf(it item, format string, v ...interface{}) {
panic(ParseError{
Message: fmt.Sprintf(format, v...),
Position: it.pos,
Line: it.pos.Len,
LastKey: p.current(),
})
}
func (p *parser) panicf(format string, v ...interface{}) {
panic(ParseError{
Message: fmt.Sprintf(format, v...),
Position: p.pos,
Line: p.pos.Line,
LastKey: p.current(),
})
}
func (p *parser) next() item {
it := p.lx.nextItem()
//fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.line, it.val)
//fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.pos.Line, it.val)
if it.typ == itemError {
p.panicf("%s", it.val)
if it.err != nil {
panic(ParseError{
Position: it.pos,
Line: it.pos.Line,
LastKey: p.current(),
err: it.err,
})
}
p.panicItemf(it, "%s", it.val)
}
return it
}
func (p *parser) nextPos() item {
it := p.next()
p.pos = it.pos
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
@ -119,11 +158,9 @@ func (p *parser) assertEqual(expected, got itemType) {
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart: // # ..
p.approxLine = item.line
p.expect(itemText)
case itemTableStart: // [ .. ]
name := p.next()
p.approxLine = name.line
name := p.nextPos()
var key Key
for ; name.typ != itemTableEnd && name.typ != itemEOF; name = p.next() {
@ -132,11 +169,10 @@ func (p *parser) topLevel(item item) {
p.assertEqual(itemTableEnd, name.typ)
p.addContext(key, false)
p.setType("", tomlHash)
p.setType("", tomlHash, item.pos)
p.ordered = append(p.ordered, key)
case itemArrayTableStart: // [[ .. ]]
name := p.next()
p.approxLine = name.line
name := p.nextPos()
var key Key
for ; name.typ != itemArrayTableEnd && name.typ != itemEOF; name = p.next() {
@ -145,13 +181,12 @@ func (p *parser) topLevel(item item) {
p.assertEqual(itemArrayTableEnd, name.typ)
p.addContext(key, true)
p.setType("", tomlArrayHash)
p.setType("", tomlArrayHash, item.pos)
p.ordered = append(p.ordered, key)
case itemKeyStart: // key = ..
outerContext := p.context
/// Read all the key parts (e.g. 'a' and 'b' in 'a.b')
k := p.next()
p.approxLine = k.line
k := p.nextPos()
var key Key
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
key = append(key, p.keyString(k))
@ -167,11 +202,12 @@ func (p *parser) topLevel(item item) {
for i := range context {
p.addImplicitContext(append(p.context, context[i:i+1]...))
}
p.ordered = append(p.ordered, p.context.add(p.currentKey))
/// Set value.
val, typ := p.value(p.next(), false)
p.set(p.currentKey, val, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
vItem := p.next()
val, typ := p.value(vItem, false)
p.set(p.currentKey, val, typ, vItem.pos)
/// Remove the context we added (preserving any context from [tbl] lines).
p.context = outerContext
@ -206,9 +242,9 @@ var datetimeRepl = strings.NewReplacer(
func (p *parser) value(it item, parentIsArray bool) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
return p.replaceEscapes(it, it.val), p.typeOfPrimitive(it)
case itemMultilineString:
return p.replaceEscapes(stripFirstNewline(stripEscapedNewlines(it.val))), p.typeOfPrimitive(it)
return p.replaceEscapes(it, p.stripEscapedNewlines(stripFirstNewline(it.val))), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
@ -240,10 +276,10 @@ func (p *parser) value(it item, parentIsArray bool) (interface{}, tomlType) {
func (p *parser) valueInteger(it item) (interface{}, tomlType) {
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits", it.val)
p.panicItemf(it, "Invalid integer %q: underscores must be surrounded by digits", it.val)
}
if numHasLeadingZero(it.val) {
p.panicf("Invalid integer %q: cannot have leading zeroes", it.val)
p.panicItemf(it, "Invalid integer %q: cannot have leading zeroes", it.val)
}
num, err := strconv.ParseInt(it.val, 0, 64)
@ -254,7 +290,7 @@ func (p *parser) valueInteger(it item) (interface{}, tomlType) {
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit signed integers.", it.val)
p.panicErr(it, errParseRange{i: it.val, size: "int64"})
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
@ -272,18 +308,18 @@ func (p *parser) valueFloat(it item) (interface{}, tomlType) {
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be surrounded by digits", it.val)
p.panicItemf(it, "Invalid float %q: underscores must be surrounded by digits", it.val)
}
}
if len(parts) > 0 && numHasLeadingZero(parts[0]) {
p.panicf("Invalid float %q: cannot have leading zeroes", it.val)
p.panicItemf(it, "Invalid float %q: cannot have leading zeroes", it.val)
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed by one or more digits", it.val)
p.panicItemf(it, "Invalid float %q: '.' must be followed by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
if val == "+nan" || val == "-nan" { // Go doesn't support this, but TOML spec does.
@ -292,9 +328,9 @@ func (p *parser) valueFloat(it item) (interface{}, tomlType) {
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit IEEE-754 floating-point numbers.", it.val)
p.panicErr(it, errParseRange{i: it.val, size: "float64"})
} else {
p.panicf("Invalid float value: %q", it.val)
p.panicItemf(it, "Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
@ -303,11 +339,17 @@ func (p *parser) valueFloat(it item) (interface{}, tomlType) {
var dtTypes = []struct {
fmt string
zone *time.Location
next bool
}{
{time.RFC3339Nano, time.Local},
{"2006-01-02T15:04:05.999999999", internal.LocalDatetime},
{"2006-01-02", internal.LocalDate},
{"15:04:05.999999999", internal.LocalTime},
{time.RFC3339Nano, time.Local, false},
{"2006-01-02T15:04:05.999999999", internal.LocalDatetime, false},
{"2006-01-02", internal.LocalDate, false},
{"15:04:05.999999999", internal.LocalTime, false},
// tomlNext
{"2006-01-02T15:04Z07:00", time.Local, true},
{"2006-01-02T15:04", internal.LocalDatetime, true},
{"15:04", internal.LocalTime, true},
}
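The layouts above are what valueDatetime tries in order; the last three are only attempted when the tomlNext mode (BURNTSUSHI_TOML_110) is enabled. On the decoding side an offset datetime ends up in a plain time.Time; a sketch with an invented key and value:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct {
		Created time.Time `toml:"created"`
	}
	if _, err := toml.Decode("created = 2024-04-19T18:19:16+02:00\n", &v); err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.Created.Format(time.RFC3339)) // 2024-04-19T18:19:16+02:00
}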
func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
@ -318,6 +360,9 @@ func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
err error
)
for _, dt := range dtTypes {
if dt.next && !p.tomlNext {
continue
}
t, err = time.ParseInLocation(dt.fmt, it.val, dt.zone)
if err == nil {
ok = true
@ -325,18 +370,21 @@ func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
p.panicItemf(it, "Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
}
func (p *parser) valueArray(it item) (interface{}, tomlType) {
p.setType(p.currentKey, tomlArray)
p.setType(p.currentKey, tomlArray, it.pos)
// p.setType(p.currentKey, typ)
var (
array []interface{}
types []tomlType
// Initialize to a non-nil empty slice. This makes it consistent with
// how S = [] decodes into a non-nil slice inside something like struct
// { S []string }. See #338
array = []interface{}{}
)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
@ -347,6 +395,13 @@ func (p *parser) valueArray(it item) (interface{}, tomlType) {
val, typ := p.value(it, true)
array = append(array, val)
types = append(types, typ)
// XXX: types isn't used here, we need it to record the accurate type
// information.
//
// Not entirely sure how to best store this; could use "key[0]",
// "key[1]" notation, or maybe store it on the Array type?
_ = types
}
return array, tomlArray
}
@ -373,8 +428,7 @@ func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tom
}
/// Read all key parts.
k := p.next()
p.approxLine = k.line
k := p.nextPos()
var key Key
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
key = append(key, p.keyString(k))
@ -390,11 +444,11 @@ func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tom
for i := range context {
p.addImplicitContext(append(p.context, context[i:i+1]...))
}
p.ordered = append(p.ordered, p.context.add(p.currentKey))
/// Set the value.
val, typ := p.value(p.next(), false)
p.set(p.currentKey, val, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.set(p.currentKey, val, typ, it.pos)
hash[p.currentKey] = val
/// Restore context.
@ -408,7 +462,7 @@ func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tom
// numHasLeadingZero checks if this number has leading zeroes, allowing for '0',
// +/- signs, and base prefixes.
func numHasLeadingZero(s string) bool {
if len(s) > 1 && s[0] == '0' && isDigit(rune(s[1])) { // >1 to allow "0" and isDigit to allow 0x
if len(s) > 1 && s[0] == '0' && !(s[1] == 'b' || s[1] == 'o' || s[1] == 'x') { // Allow 0b, 0o, 0x
return true
}
if len(s) > 2 && (s[0] == '-' || s[0] == '+') && s[1] == '0' {
@ -503,7 +557,7 @@ func (p *parser) addContext(key Key, array bool) {
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as an array.", keyContext)
p.panicf("Key '%s' was already created and cannot be used as an array.", key)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
@ -512,9 +566,9 @@ func (p *parser) addContext(key Key, array bool) {
}
// set calls setValue and setType.
func (p *parser) set(key string, val interface{}, typ tomlType) {
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
func (p *parser) set(key string, val interface{}, typ tomlType, pos Position) {
p.setValue(key, val)
p.setType(key, typ, pos)
}
// setValue sets the given key to the given value in the current context.
@@ -573,32 +627,33 @@ func (p *parser) setValue(key string, value interface{}) {
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
// setType sets the type of a particular value at a given key. It should be
// called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
func (p *parser) setType(key string, typ tomlType, pos Position) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
keyContext = append(keyContext, p.context...)
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
// Special case to make empty keys ("" = 1) work.
// Without it, it will set "" rather than `""`.
// TODO: why is this needed? And why is this only needed here?
if len(keyContext) == 0 {
keyContext = Key{""}
}
p.keyInfo[keyContext.String()] = keyInfo{tomlType: typ, pos: pos}
}
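`setType` now records a `keyInfo` (type plus position) for each key, and the final special case keeps genuinely empty keys addressable, since an empty quoted key is valid TOML. A minimal sketch of that edge case seen from the outside, assuming the public `Decode` helper:

```
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var m map[string]interface{}
	if _, err := toml.Decode(`"" = 1`, &m); err != nil {
		panic(err)
	}
	// The empty quoted key decodes like any other key.
	fmt.Printf("%v %T\n", m[""], m[""]) // 1 int64 (integers decode as int64)
}
```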
// Implicit keys need to be created when tables are implied in "a.b.c.d = 1" and
// "[a.b.c]" (the "a", "b", and "c" hashes are never created explicitly).
func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = true }
func (p *parser) removeImplicit(key Key) { p.implicits[key.String()] = false }
func (p *parser) isImplicit(key Key) bool { return p.implicits[key.String()] }
func (p *parser) isArray(key Key) bool { return p.types[key.String()] == tomlArray }
func (p *parser) addImplicitContext(key Key) {
p.addImplicit(key)
p.addContext(key, false)
}
func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = struct{}{} }
func (p *parser) removeImplicit(key Key) { delete(p.implicits, key.String()) }
func (p *parser) isImplicit(key Key) bool { _, ok := p.implicits[key.String()]; return ok }
func (p *parser) isArray(key Key) bool { return p.keyInfo[key.String()].tomlType == tomlArray }
func (p *parser) addImplicitContext(key Key) { p.addImplicit(key); p.addContext(key, false) }
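Implicit tables are what make a bare `[a.b.c]` header or a dotted key like `a.b.c.d = 1` work: the intermediate tables `a` and `a.b` are created even though they never appear explicitly. A small illustration, assuming the public `Decode`/`MetaData.IsDefined` API:

```
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}
	md, err := toml.Decode("[a.b.c]\nd = 1\n", &v)
	if err != nil {
		panic(err)
	}
	// "a" and "a.b" were created implicitly so that "a.b.c.d" is reachable.
	fmt.Println(md.IsDefined("a", "b"))           // true
	fmt.Println(md.IsDefined("a", "b", "c", "d")) // true
}
```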
// current returns the full key name of the current context.
func (p *parser) current() string {
@@ -621,49 +676,58 @@ func stripFirstNewline(s string) string {
return s
}
// Remove newlines inside triple-quoted strings if a line ends with "\".
func stripEscapedNewlines(s string) string {
split := strings.Split(s, "\n")
if len(split) < 1 {
return s
}
// stripEscapedNewlines removes whitespace after line-ending backslashes in
// multiline strings.
//
// A line-ending backslash is an unescaped \ followed only by whitespace until
// the next newline. After a line-ending backslash, all whitespace is removed
// until the next non-whitespace character.
func (p *parser) stripEscapedNewlines(s string) string {
var b strings.Builder
var i int
for {
ix := strings.Index(s[i:], `\`)
if ix < 0 {
b.WriteString(s)
return b.String()
}
i += ix
escNL := false // Keep track of the last non-blank line was escaped.
for i, line := range split {
line = strings.TrimRight(line, " \t\r")
if len(line) == 0 || line[len(line)-1] != '\\' {
split[i] = strings.TrimRight(split[i], "\r")
if !escNL && i != len(split)-1 {
split[i] += "\n"
if len(s) > i+1 && s[i+1] == '\\' {
// Escaped backslash.
i += 2
continue
}
// Scan until the next non-whitespace.
j := i + 1
whitespaceLoop:
for ; j < len(s); j++ {
switch s[j] {
case ' ', '\t', '\r', '\n':
default:
break whitespaceLoop
}
}
if j == i+1 {
// Not a whitespace escape.
i++
continue
}
escBS := true
for j := len(line) - 1; j >= 0 && line[j] == '\\'; j-- {
escBS = !escBS
}
if escNL {
line = strings.TrimLeft(line, " \t\r")
}
escNL = !escBS
if escBS {
split[i] += "\n"
if !strings.Contains(s[i:j], "\n") {
// This is not a line-ending backslash.
// (It's a bad escape sequence, but we can let
// replaceEscapes catch it.)
i++
continue
}
split[i] = line[:len(line)-1] // Remove \
if len(split)-1 > i {
split[i+1] = strings.TrimLeft(split[i+1], " \t\r")
}
b.WriteString(s[:i])
s = s[j:]
i = 0
}
return strings.Join(split, "")
}
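The rewritten helper implements the line-ending backslash rule it documents: an unescaped `\` followed only by whitespace up to the newline removes everything through the next non-whitespace character. Seen from the outside (a hedged sketch via the public `Decode` helper):

```
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	// The backslash at the end of the first line swallows the newline and
	// the indentation of the second line.
	doc := `s = """one \
       two"""`

	var m map[string]string
	if _, err := toml.Decode(doc, &m); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", m["s"]) // "one two"
}
```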
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
func (p *parser) replaceEscapes(it item, str string) string {
replaced := make([]rune, 0, len(str))
s := []byte(str)
r := 0
for r < len(s) {
@@ -681,10 +745,8 @@ func (p *parser) replaceEscapes(str string) string {
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case ' ', '\t':
p.panicf("invalid escape: '\\%c'", s[r])
return ""
p.panicItemf(it, "invalid escape: '\\%c'", s[r])
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
@@ -700,24 +762,35 @@ func (p *parser) replaceEscapes(str string) string {
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case 'e':
if p.tomlNext {
replaced = append(replaced, rune(0x001B))
r += 1
}
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'x':
if p.tomlNext {
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+3])
replaced = append(replaced, escaped)
r += 3
}
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+9])
replaced = append(replaced, escaped)
r += 9
}
@@ -725,15 +798,14 @@ func (p *parser) replaceEscapes(str string) string {
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
func (p *parser) asciiEscapeToUnicode(it item, bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
p.bug("Could not parse '%s' as a hexadecimal number, but the lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
p.panicItemf(it, "Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
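`replaceEscapes` now receives the lexer item so invalid escapes are reported at an accurate position, and (behind `tomlNext`) it gains the `\e` and `\xHH` escapes, while `asciiEscapeToUnicode` still rejects invalid runes. A short demonstration of the externally visible behaviour, assuming the public `Decode` helper:

```
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var m map[string]string
	if _, err := toml.Decode(`s = "caf\u00E9\tok"`, &m); err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", m["s"]) // "café\tok"

	// Invalid code points are rejected rather than silently accepted: a
	// surrogate such as "\uD800" fails utf8.ValidRune and yields a parse
	// error pointing at the offending string.
	_, err := toml.Decode(`bad = "\uD800"`, &m)
	fmt.Println(err != nil) // true
}
```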


@@ -1,70 +0,0 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be militating
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}


@@ -70,8 +70,8 @@ func typeFields(t reflect.Type) []field {
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
var count map[reflect.Type]int
var nextCount map[reflect.Type]int
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}

vendor/github.com/BurntSushi/toml/type_toml.go (generated, vendored, new file)

@@ -0,0 +1,70 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be militating
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsTable(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
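These base type names are what the decoder reports through its metadata; a short sketch of how they surface, assuming the public `MetaData.Type` accessor:

```
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v map[string]interface{}
	md, err := toml.Decode(`
n = 1
f = 2.5
s = "x"
ok = true
a = [1, 2]

[t]
b = 1
`, &v)
	if err != nil {
		panic(err)
	}
	fmt.Println(md.Type("n"), md.Type("f"), md.Type("s"))  // Integer Float String
	fmt.Println(md.Type("ok"), md.Type("a"), md.Type("t")) // Bool Array Hash
}
```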


@@ -1,15 +0,0 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/


@@ -1,150 +0,0 @@
# This file contains all available configuration options
# with their default values.
# options for analysis running
run:
# default concurrency is the available CPU count
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
deadline: 15m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# include test files or not, default is true
tests: false
# list of build tags, all linters use it. Default is empty list.
#build-tags:
# - mytag
# which dirs to skip: they won't be analyzed;
# can use regexp here: generated.*, regexp is applied on full path;
# default value is empty list, but next dirs are always skipped independently
# from this option's value:
# vendor$, third_party$, testdata$, examples$, Godeps$, builtin$
skip-dirs:
- /gen$
# which files to skip: they will be analyzed, but issues from them
# won't be reported. Default value is empty list, but there is
# no need to include all autogenerated files, we confidently recognize
# autogenerated files. If it's not please let us know.
skip-files:
- ".*\\.my\\.go$"
- lib/bad.go
- ".*\\.template\\.go$"
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle, default is "colored-line-number"
format: colored-line-number
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# all available settings of specific linters
linters-settings:
errcheck:
# report about not checking errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: false
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: false
govet:
# report about shadowed variables
check-shadowing: true
# Obtain type information from installed (to $GOPATH/pkg) package files:
# golangci-lint will execute `go install -i` and `go test -i` for analyzed packages
# before analyzing them.
# By default this option is disabled and govet gets type information by loader from source code.
# Loading from source code is slow, but it's done only once for all linters.
# Go-installing of packages first time is much slower than loading them from source code,
# therefore this option is disabled by default.
# But repeated installation is fast in go >= 1.10 because of build caching.
# Enable this option only if all conditions are met:
# 1. you use only "fast" linters (--fast e.g.): no program loading occurs
# 2. you use go >= 1.10
# 3. you do repeated runs (false for CI) or cache $GOPATH/pkg or `go env GOCACHE` dir in CI.
use-installed-packages: false
golint:
# minimal confidence for issues, default is 0.8
min-confidence: 0.8
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
gocyclo:
# minimal code complexity to report, 30 by default (but we recommend 10-20)
min-complexity: 10
maligned:
# print struct with more effective memory layout or not, false by default
suggest-new: true
dupl:
# tokens count to trigger issue, 150 by default
threshold: 100
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 3
depguard:
list-type: blacklist
include-go-root: false
packages:
- github.com/davecgh/go-spew/spew
linters:
#enable:
# - staticcheck
# - unused
# - gosimple
enable-all: true
disable:
- lll
disable-all: false
#presets:
# - bugs
# - unused
fast: false
issues:
# List of regexps of issue texts to exclude, empty list by default.
# But independently from this option we use default exclude patterns,
# it can be disabled by `exclude-use-default: false`. To list all
# excluded by default patterns execute `golangci-lint run --help`
exclude:
- "`parseTained` is unused"
- "`parseState` is unused"
# Independently from option `exclude` we use default exclude patterns,
# it can be disabled by this option. To list all
# excluded by default patterns execute `golangci-lint run --help`.
# Default value for this option is false.
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same: 0
# Show only new issues: if there are unstaged changes or untracked files,
# only those changes are analyzed, else only changes in HEAD~ are analyzed.
# It's a super-useful option for integration of golangci-lint into existing
# large codebase. It's not practical to fix all existing issues at the moment
# of integration: much better don't allow issues in new code.
# Default is false.
new: false
# Show only new issues created after git revision `REV`
#new-from-rev: REV
# Show only new issues created in git patch with set file path.
#new-from-patch: path/to/patch/file


@@ -1,24 +0,0 @@
language: go
go:
- "1.13"
- "1.14"
- tip
env:
- GO111MODULE=on
before_install:
- go get github.com/axw/gocov/gocov
- go get github.com/mattn/goveralls
- go get golang.org/x/tools/cmd/cover
- go get golang.org/x/tools/cmd/goimports
- wget -O - -q https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh| sh
script:
- test -z "$(goimports -d ./ 2>&1)"
- ./bin/golangci-lint run
- go test -v -race ./...
after_success:
- test "$TRAVIS_GO_VERSION" = "1.14" && goveralls -service=travis-ci


@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2020 Djarvur
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,75 +0,0 @@
= err113 image:https://godoc.org/github.com/Djarvur/go-err113?status.svg["GoDoc",link="http://godoc.org/github.com/Djarvur/go-err113"] image:https://travis-ci.org/Djarvur/go-err113.svg["Build Status",link="https://travis-ci.org/Djarvur/go-err113"] image:https://coveralls.io/repos/Djarvur/go-err113/badge.svg?branch=master&service=github["Coverage Status",link="https://coveralls.io/github/Djarvur/go-err113?branch=master"]
Daniel Podolsky
:toc:
Go linter to check error-handling expressions
== Details
Starting from Go 1.13, the behaviour of the standard `error` type changed: one `error` can be derived from another with the `fmt.Errorf()` function using the `%w` format specifier.
This allows an error hierarchy to be built for flexible and responsible error processing.
To make this possible, at least two simple rules should be followed:
1. `error` values should not be compared directly, but with the `errors.Is()` function.
1. `error` values should not be created dynamically from scratch, but by wrapping a static (package-level) error.
This linter checks the code for compliance with these two rules.
=== Reports
So, `err113` reports every `==` and `!=` comparison on variables of the exact `error` type, except comparisons to `nil` and `io.EOF`.
It also reports any call to the `errors.New()` and `fmt.Errorf()` functions, except calls used to initialise package-level variables and `fmt.Errorf()` calls that wrap other errors.
Note: non-standard packages, like `github.com/pkg/errors`, are ignored completely.
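A small sketch of code that satisfies both rules (illustrative only, standard library imports): a static, package-level sentinel is wrapped with `%w` and matched with `errors.Is()`.

```
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// A static, package-level sentinel error; call sites wrap it instead of
// creating fresh errors dynamically.
var ErrConfigMissing = errors.New("config missing")

func loadConfig(path string) error {
	if _, err := os.Stat(path); err != nil {
		if errors.Is(err, fs.ErrNotExist) { // not: err == fs.ErrNotExist
			return fmt.Errorf("%w: %s", ErrConfigMissing, path)
		}
		return fmt.Errorf("stat %s: %w", path, err)
	}
	return nil
}

func main() {
	if err := loadConfig("/nonexistent"); errors.Is(err, ErrConfigMissing) {
		fmt.Println("missing:", err)
	}
}
```

A direct `err == fs.ErrNotExist` comparison, or a bare `errors.New()`/`fmt.Errorf()` without `%w` inside `loadConfig`, would be reported.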
== Install
```
go get -u github.com/Djarvur/go-err113/cmd/err113
```
== Usage
Defined by link:https://pkg.go.dev/golang.org/x/tools/go/analysis/singlechecker[singlechecker] package.
```
err113: checks the error handling rules according to the Go 1.13 new error type
Usage: err113 [-flag] [package]
Flags:
-V print version and exit
-all
no effect (deprecated)
-c int
display offending line with this many lines of context (default -1)
-cpuprofile string
write CPU profile to this file
-debug string
debug flags, any subset of "fpstv"
-fix
apply all suggested fixes
-flags
print analyzer flags in JSON
-json
emit JSON output
-memprofile string
write memory profile to this file
-source
no effect (deprecated)
-tags string
no effect (deprecated)
-trace string
write trace log to this file
-v no effect (deprecated)
```
== Thanks
To link:https://github.com/quasilyte[Iskander (Alex) Sharipov] for the really useful advice.
To link:https://github.com/jackwhelpton[Jack Whelpton] for the bugfix provided.


@@ -1,123 +0,0 @@
package err113
import (
"fmt"
"go/ast"
"go/token"
"go/types"
"golang.org/x/tools/go/analysis"
)
func inspectComparision(pass *analysis.Pass, n ast.Node) bool { // nolint: unparam
// check whether the node is a binary comparison expression
be, ok := n.(*ast.BinaryExpr)
if !ok {
return true
}
// check if it is a comparison operation
if be.Op != token.EQL && be.Op != token.NEQ {
return true
}
if !areBothErrors(be.X, be.Y, pass.TypesInfo) {
return true
}
oldExpr := render(pass.Fset, be)
negate := ""
if be.Op == token.NEQ {
negate = "!"
}
newExpr := fmt.Sprintf("%s%s.Is(%s, %s)", negate, "errors", rawString(be.X), rawString(be.Y))
pass.Report(
analysis.Diagnostic{
Pos: be.Pos(),
Message: fmt.Sprintf("do not compare errors directly %q, use %q instead", oldExpr, newExpr),
SuggestedFixes: []analysis.SuggestedFix{
{
Message: fmt.Sprintf("should replace %q with %q", oldExpr, newExpr),
TextEdits: []analysis.TextEdit{
{
Pos: be.Pos(),
End: be.End(),
NewText: []byte(newExpr),
},
},
},
},
},
)
return true
}
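Why the suggested `errors.Is` rewrite matters in practice: a direct comparison stops matching as soon as the error is wrapped, while `errors.Is` walks the wrap chain. A tiny runnable illustration:

```
package main

import (
	"errors"
	"fmt"
	"io"
)

func main() {
	err := fmt.Errorf("read header: %w", io.ErrUnexpectedEOF)

	// What inspectComparision flags: a direct comparison, which fails once
	// the error has been wrapped.
	fmt.Println(err == io.ErrUnexpectedEOF) // false

	// The suggested replacement matches anywhere in the wrap chain.
	fmt.Println(errors.Is(err, io.ErrUnexpectedEOF)) // true
}
```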
func isError(v ast.Expr, info *types.Info) bool {
if intf, ok := info.TypeOf(v).Underlying().(*types.Interface); ok {
return intf.NumMethods() == 1 && intf.Method(0).FullName() == "(error).Error"
}
return false
}
func isEOF(ex ast.Expr, info *types.Info) bool {
se, ok := ex.(*ast.SelectorExpr)
if !ok || se.Sel.Name != "EOF" {
return false
}
if ep, ok := asImportedName(se.X, info); !ok || ep != "io" {
return false
}
return true
}
func asImportedName(ex ast.Expr, info *types.Info) (string, bool) {
ei, ok := ex.(*ast.Ident)
if !ok {
return "", false
}
ep, ok := info.ObjectOf(ei).(*types.PkgName)
if !ok {
return "", false
}
return ep.Imported().Path(), true
}
func areBothErrors(x, y ast.Expr, typesInfo *types.Info) bool {
// check that both left and right hand side are not nil
if typesInfo.Types[x].IsNil() || typesInfo.Types[y].IsNil() {
return false
}
// check that both left and right hand side are not io.EOF
if isEOF(x, typesInfo) || isEOF(y, typesInfo) {
return false
}
// check that both left and right hand side are errors
if !isError(x, typesInfo) && !isError(y, typesInfo) {
return false
}
return true
}
func rawString(x ast.Expr) string {
switch t := x.(type) {
case *ast.Ident:
return t.Name
case *ast.SelectorExpr:
return fmt.Sprintf("%s.%s", rawString(t.X), t.Sel.Name)
case *ast.CallExpr:
return fmt.Sprintf("%s()", rawString(t.Fun))
}
return fmt.Sprintf("%s", x)
}


@@ -1,74 +0,0 @@
package err113
import (
"go/ast"
"go/types"
"strings"
"golang.org/x/tools/go/analysis"
)
var methods2check = map[string]map[string]func(*ast.CallExpr, *types.Info) bool{ // nolint: gochecknoglobals
"errors": {"New": justTrue},
"fmt": {"Errorf": checkWrap},
}
func justTrue(*ast.CallExpr, *types.Info) bool {
return true
}
func checkWrap(ce *ast.CallExpr, info *types.Info) bool {
return !(len(ce.Args) > 0 && strings.Contains(toString(ce.Args[0], info), `%w`))
}
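Per the table above, `errors.New` calls are always candidates, while `fmt.Errorf` is exempt when its format string contains `%w`. A hedged sketch of what that distinction means at call sites (illustrative names):

```
package main

import (
	"errors"
	"fmt"
)

// Package-level definitions are collected as top-level declarations and
// skipped by inspectDefinition, so static sentinels are fine.
var ErrBase = errors.New("base error")

func wrap(err error) error {
	return fmt.Errorf("while syncing: %w", err) // contains %w: not reported
}

func fresh() error {
	return fmt.Errorf("something went wrong") // no %w: reported as a dynamic error
}

func main() {
	fmt.Println(wrap(ErrBase), fresh())
}
```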
func inspectDefinition(pass *analysis.Pass, tlds map[*ast.CallExpr]struct{}, n ast.Node) bool { //nolint: unparam
// check whether the node is a call expression
ce, ok := n.(*ast.CallExpr)
if !ok {
return true
}
if _, ok = tlds[ce]; ok {
return true
}
fn, ok := ce.Fun.(*ast.SelectorExpr)
if !ok {
return true
}
fxName, ok := asImportedName(fn.X, pass.TypesInfo)
if !ok {
return true
}
methods, ok := methods2check[fxName]
if !ok {
return true
}
checkFunc, ok := methods[fn.Sel.Name]
if !ok {
return true
}
if !checkFunc(ce, pass.TypesInfo) {
return true
}
pass.Reportf(
ce.Pos(),
"do not define dynamic errors, use wrapped static errors instead: %q",
render(pass.Fset, ce),
)
return true
}
func toString(ex ast.Expr, info *types.Info) string {
if tv, ok := info.Types[ex]; ok && tv.Value != nil {
return tv.Value.ExactString()
}
return ""
}


@@ -1,90 +0,0 @@
// Package err113 is a Golang linter to check the errors handling expressions
package err113
import (
"bytes"
"go/ast"
"go/printer"
"go/token"
"golang.org/x/tools/go/analysis"
)
// NewAnalyzer creates a new analysis.Analyzer instance tuned to run err113 checks.
func NewAnalyzer() *analysis.Analyzer {
return &analysis.Analyzer{
Name: "err113",
Doc: "checks the error handling rules according to the Go 1.13 new error type",
Run: run,
}
}
func run(pass *analysis.Pass) (interface{}, error) {
for _, file := range pass.Files {
tlds := enumerateFileDecls(file)
ast.Inspect(
file,
func(n ast.Node) bool {
return inspectComparision(pass, n) &&
inspectDefinition(pass, tlds, n)
},
)
}
return nil, nil
}
// render returns the pretty-print of the given node.
func render(fset *token.FileSet, x interface{}) string {
var buf bytes.Buffer
if err := printer.Fprint(&buf, fset, x); err != nil {
panic(err)
}
return buf.String()
}
func enumerateFileDecls(f *ast.File) map[*ast.CallExpr]struct{} {
res := make(map[*ast.CallExpr]struct{})
var ces []*ast.CallExpr // nolint: prealloc
for _, d := range f.Decls {
ces = append(ces, enumerateDeclVars(d)...)
}
for _, ce := range ces {
res[ce] = struct{}{}
}
return res
}
func enumerateDeclVars(d ast.Decl) (res []*ast.CallExpr) {
td, ok := d.(*ast.GenDecl)
if !ok || td.Tok != token.VAR {
return nil
}
for _, s := range td.Specs {
res = append(res, enumerateSpecValues(s)...)
}
return res
}
func enumerateSpecValues(s ast.Spec) (res []*ast.CallExpr) {
vs, ok := s.(*ast.ValueSpec)
if !ok {
return nil
}
for _, v := range vs.Values {
if ce, ok := v.(*ast.CallExpr); ok {
res = append(res, ce)
}
}
return res
}
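As the README notes, the analyzer is meant to be driven by the `singlechecker` package; a minimal command wrapper would plausibly look like this (a sketch, not the repository's actual `cmd/err113` source):

```
package main

import (
	err113 "github.com/Djarvur/go-err113"
	"golang.org/x/tools/go/analysis/singlechecker"
)

func main() {
	singlechecker.Main(err113.NewAnalyzer())
}
```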


@@ -1,29 +0,0 @@
language: go
go:
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- 1.12.x
- tip
# Setting sudo access to false will let Travis CI use containers rather than
# VMs to run the tests. For more details see:
# - http://docs.travis-ci.com/user/workers/container-based-infrastructure/
# - http://docs.travis-ci.com/user/workers/standard-infrastructure/
sudo: false
script:
- make setup
- make test
notifications:
webhooks:
urls:
- https://webhooks.gitter.im/e/06e3328629952dabe3e0
on_success: change # options: [always|never|change] default: always
on_failure: always # options: [always|never|change] default: always
on_start: never # options: [always|never|change] default: always


@@ -1,109 +0,0 @@
# 1.5.0 (2019-09-11)
## Added
- #103: Add basic fuzzing for `NewVersion()` (thanks @jesse-c)
## Changed
- #82: Clarify wildcard meaning in range constraints and update tests for it (thanks @greysteil)
- #83: Clarify caret operator range for pre-1.0.0 dependencies (thanks @greysteil)
- #72: Adding docs comment pointing to vert for a cli
- #71: Update the docs on pre-release comparator handling
- #89: Test with new go versions (thanks @thedevsaddam)
- #87: Added $ to ValidPrerelease for better validation (thanks @jeremycarroll)
## Fixed
- #78: Fix unchecked error in example code (thanks @ravron)
- #70: Fix the handling of pre-releases and the 0.0.0 release edge case
- #97: Fixed copyright file for proper display on GitHub
- #107: Fix handling prerelease when sorting alphanum and num
- #109: Fixed where Validate sometimes returns wrong message on error
# 1.4.2 (2018-04-10)
## Changed
- #72: Updated the docs to point to vert for a console application
- #71: Update the docs on pre-release comparator handling
## Fixed
- #70: Fix the handling of pre-releases and the 0.0.0 release edge case
# 1.4.1 (2018-04-02)
## Fixed
- Fixed #64: Fix pre-release precedence issue (thanks @uudashr)
# 1.4.0 (2017-10-04)
## Changed
- #61: Update NewVersion to parse ints with a 64bit int size (thanks @zknill)
# 1.3.1 (2017-07-10)
## Fixed
- Fixed #57: number comparisons in prerelease sometimes inaccurate
# 1.3.0 (2017-05-02)
## Added
- #45: Added json (un)marshaling support (thanks @mh-cbon)
- Stability marker. See https://masterminds.github.io/stability/
## Fixed
- #51: Fix handling of single digit tilde constraint (thanks @dgodd)
## Changed
- #55: The godoc icon moved from png to svg
# 1.2.3 (2017-04-03)
## Fixed
- #46: Fixed 0.x.x and 0.0.x in constraints being treated as *
# Release 1.2.2 (2016-12-13)
## Fixed
- #34: Fixed issue where hyphen range was not working with pre-release parsing.
# Release 1.2.1 (2016-11-28)
## Fixed
- #24: Fixed edge case issue where constraint "> 0" does not handle "0.0.1-alpha"
properly.
# Release 1.2.0 (2016-11-04)
## Added
- #20: Added MustParse function for versions (thanks @adamreese)
- #15: Added increment methods on versions (thanks @mh-cbon)
## Fixed
- Issue #21: Per the SemVer spec (section 9) a pre-release is unstable and
might not satisfy the intended compatibility. The change here ignores pre-releases
on constraint checks (e.g., ~ or ^) when a pre-release is not part of the
constraint. For example, `^1.2.3` will ignore pre-releases while
`^1.2.3-alpha` will include them.
# Release 1.1.1 (2016-06-30)
## Changed
- Issue #9: Speed up version comparison performance (thanks @sdboyer)
- Issue #8: Added benchmarks (thanks @sdboyer)
- Updated Go Report Card URL to new location
- Updated Readme to add code snippet formatting (thanks @mh-cbon)
- Updating tagging to v[SemVer] structure for compatibility with other tools.
# Release 1.1.0 (2016-03-11)
- Issue #2: Implemented validation to provide reasons a versions failed a
constraint.
# Release 1.0.1 (2015-12-31)
- Fixed #1: * constraint failing on valid versions.
# Release 1.0.0 (2015-10-20)
- Initial release


@@ -1,19 +0,0 @@
Copyright (C) 2014-2019, Matt Butcher and Matt Farina
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,36 +0,0 @@
.PHONY: setup
setup:
go get -u gopkg.in/alecthomas/gometalinter.v1
gometalinter.v1 --install
.PHONY: test
test: validate lint
@echo "==> Running tests"
go test -v
.PHONY: validate
validate:
@echo "==> Running static validations"
@gometalinter.v1 \
--disable-all \
--enable deadcode \
--severity deadcode:error \
--enable gofmt \
--enable gosimple \
--enable ineffassign \
--enable misspell \
--enable vet \
--tests \
--vendor \
--deadline 60s \
./... || exit_code=1
.PHONY: lint
lint:
@echo "==> Running linters"
@gometalinter.v1 \
--disable-all \
--enable golint \
--vendor \
--deadline 60s \
./... || :

Some files were not shown because too many files have changed in this diff.