Compare commits

...

70 Commits

Author SHA1 Message Date
Noémi Ványi 276ffd3f01 Searx is no longer maintained 2023-09-07 18:16:17 +02:00
Grant Lanham Jr 75b859d2a8
Fix quoting issue in search_operator plugin (#3479) 2023-04-05 09:28:58 +02:00
dependabot[bot] 48eb13cf4c
Bump pallets-sphinx-themes from 2.0.2 to 2.0.3 (#3450)
Bumps [pallets-sphinx-themes](https://github.com/pallets/pallets-sphinx-themes) from 2.0.2 to 2.0.3.
- [Release notes](https://github.com/pallets/pallets-sphinx-themes/releases)
- [Changelog](https://github.com/pallets/pallets-sphinx-themes/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/pallets-sphinx-themes/compare/2.0.2...2.0.3)

---
updated-dependencies:
- dependency-name: pallets-sphinx-themes
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 22:52:11 +02:00
searx-bot cfec62eb7c
Update searx.data - update_wikidata_units.py (#3454)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2023-04-04 22:51:50 +02:00
dependabot[bot] 7c70c02220
Bump selenium from 4.7.2 to 4.8.3 (#3490)
Bumps [selenium](https://github.com/SeleniumHQ/Selenium) from 4.7.2 to 4.8.3.
- [Release notes](https://github.com/SeleniumHQ/Selenium/releases)
- [Commits](https://github.com/SeleniumHQ/Selenium/commits)

---
updated-dependencies:
- dependency-name: selenium
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-04-04 22:51:39 +02:00
Brett Kosinski 2fc1cd3a15
Fix regression in retrieving sc code (#3453)
I broke the code when I fixed this.  The old codepath did some trimming
of the sc value and I didn't kill that line, so the value was being
clipped.

This fixes #3430 and is confirmed working on a live instance.
2023-04-04 21:29:17 +02:00
searx-bot 5e658ef276
Update searx.data - update_ahmia_blacklist.py (#3478)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2023-04-04 21:27:49 +02:00
Grant Lanham Jr 7c6a926648
Use packaging instead of distutils (#3472)
Co-authored-by: Noémi Ványi <kvch@users.noreply.github.com>
2023-04-04 21:27:33 +02:00
br4nnigan 38606234a8
fix bing results sometimes still using bing redirect urls (#3482) 2023-04-04 21:26:44 +02:00
searx-bot eb39a846f3
Update searx.data - update_currencies.py (#3455)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2023-04-04 21:23:01 +02:00
searx-bot b15dfe0ede
Update searx.data - update_firefox_version.py (#3456)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2023-04-04 21:22:50 +02:00
Dr. Rolf Jansen 9ba072bb74
fix duckduckgo engine (#3486)
Co-authored-by: rolf <rolf@>
2023-04-04 21:18:04 +02:00
wibyweb de0fde4ec2
Update settings.yml to add pagination to Wilby (#3465) 2023-04-04 21:17:05 +02:00
Grant Lanham Jr 67c233d0c3
add "Accept" header to bing.py (#3473) 2023-04-04 21:16:04 +02:00
ganeshlab 2ec47dce5e
Fix google engine (#3489)
This issue popped up again and part of the fix was in 6f9e678346.
2023-04-04 21:12:46 +02:00
Émilien Devos (perso) 8e943d858f
Reword the TLDR as it is misleading (#3477) 2023-04-04 21:12:21 +02:00
Noémi Ványi 6ab43d1045
Skip problematic step when installing env (#3491) 2023-04-04 21:11:49 +02:00
dependabot[bot] c647b55eb0
Bump mock from 4.0.3 to 5.0.1 (#3445)
Bumps [mock](https://github.com/testing-cabal/mock) from 4.0.3 to 5.0.1.
- [Release notes](https://github.com/testing-cabal/mock/releases)
- [Changelog](https://github.com/testing-cabal/mock/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/testing-cabal/mock/compare/4.0.3...5.0.1)

---
updated-dependencies:
- dependency-name: mock
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-26 21:45:11 +01:00
dependabot[bot] 8fc9ca3f01
Bump splinter from 0.18.1 to 0.19.0 (#3447)
Bumps [splinter](https://github.com/cobrateam/splinter) from 0.18.1 to 0.19.0.
- [Release notes](https://github.com/cobrateam/splinter/releases)
- [Changelog](https://github.com/cobrateam/splinter/blob/master/docs/news.rst)
- [Commits](https://github.com/cobrateam/splinter/compare/0.18.1...0.19.0)

---
updated-dependencies:
- dependency-name: splinter
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-26 21:44:59 +01:00
dependabot[bot] ffc8ce4a51
Bump requests[socks] from 2.28.1 to 2.28.2 (#3448)
Bumps [requests[socks]](https://github.com/psf/requests) from 2.28.1 to 2.28.2.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.28.1...v2.28.2)

---
updated-dependencies:
- dependency-name: requests[socks]
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-26 21:44:36 +01:00
searx-bot f6bafab8c4
Update searx.data - update_ahmia_blacklist.py (#3440)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2023-01-14 17:46:01 +01:00
searx-bot 07240a8109
Update searx.data - update_firefox_version.py (#3439)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2023-01-14 17:45:55 +01:00
dependabot[bot] 2612204876
Bump certifi from 2022.9.24 to 2022.12.7 (#3438)
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.9.24 to 2022.12.7.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2022.09.24...2022.12.07)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-14 17:45:46 +01:00
dependabot[bot] bee9ff29e6
Bump lxml from 4.9.1 to 4.9.2 (#3436)
Bumps [lxml](https://github.com/lxml/lxml) from 4.9.1 to 4.9.2.
- [Release notes](https://github.com/lxml/lxml/releases)
- [Changelog](https://github.com/lxml/lxml/blob/master/CHANGES.txt)
- [Commits](https://github.com/lxml/lxml/compare/lxml-4.9.1...lxml-4.9.2)

---
updated-dependencies:
- dependency-name: lxml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-14 17:45:38 +01:00
dependabot[bot] cda72d29fc
Bump linuxdoc from 20221025 to 20221127 (#3421)
Bumps [linuxdoc](https://github.com/return42/linuxdoc) from 20221025 to 20221127.
- [Release notes](https://github.com/return42/linuxdoc/releases)
- [Commits](https://github.com/return42/linuxdoc/commits)

---
updated-dependencies:
- dependency-name: linuxdoc
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-14 17:45:18 +01:00
Dr. Rolf Jansen a79e8194d7
DuckDuckGo fixes (#3444)
Co-authored-by: rolf <rolf@>
2023-01-14 17:42:57 +01:00
dependabot[bot] 52a21d1192
Bump pycodestyle from 2.9.1 to 2.10.0 (#3417)
Bumps [pycodestyle](https://github.com/PyCQA/pycodestyle) from 2.9.1 to 2.10.0.
- [Release notes](https://github.com/PyCQA/pycodestyle/releases)
- [Changelog](https://github.com/PyCQA/pycodestyle/blob/main/CHANGES.txt)
- [Commits](https://github.com/PyCQA/pycodestyle/compare/2.9.1...2.10.0)

---
updated-dependencies:
- dependency-name: pycodestyle
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-29 21:51:08 +01:00
dependabot[bot] 5af8f4f563
Bump selenium from 4.5.0 to 4.7.2 (#3429)
Bumps [selenium](https://github.com/SeleniumHQ/Selenium) from 4.5.0 to 4.7.2.
- [Release notes](https://github.com/SeleniumHQ/Selenium/releases)
- [Commits](https://github.com/SeleniumHQ/Selenium/commits)

---
updated-dependencies:
- dependency-name: selenium
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-29 21:50:57 +01:00
dependabot[bot] b22cd73940
Bump pylint from 2.15.5 to 2.15.9 (#3433)
Bumps [pylint](https://github.com/PyCQA/pylint) from 2.15.5 to 2.15.9.
- [Release notes](https://github.com/PyCQA/pylint/releases)
- [Commits](https://github.com/PyCQA/pylint/compare/v2.15.5...v2.15.9)

---
updated-dependencies:
- dependency-name: pylint
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-29 21:30:27 +01:00
dependabot[bot] 03b3ad81ec
Bump twine from 4.0.1 to 4.0.2 (#3432)
Bumps [twine](https://github.com/pypa/twine) from 4.0.1 to 4.0.2.
- [Release notes](https://github.com/pypa/twine/releases)
- [Changelog](https://github.com/pypa/twine/blob/main/docs/changelog.rst)
- [Commits](https://github.com/pypa/twine/compare/4.0.1...4.0.2)

---
updated-dependencies:
- dependency-name: twine
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-12-29 21:30:19 +01:00
ebCrypto d993da3a7f
fixed typo in readme (#3424)
missing word `with` in `it comes [...] challenges`
2022-12-29 21:29:23 +01:00
ebCrypto ee231637a2
fixed typo in PR template (#3425)
`reviewer` misspelled
2022-12-29 21:28:48 +01:00
dependabot[bot] 117dbd462f
Bump linuxdoc from 20211220 to 20221025 (#3395)
Bumps [linuxdoc](https://github.com/return42/linuxdoc) from 20211220 to 20221025.
- [Release notes](https://github.com/return42/linuxdoc/releases)
- [Commits](https://github.com/return42/linuxdoc/commits)

---
updated-dependencies:
- dependency-name: linuxdoc
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-06 13:06:14 +01:00
searx-bot 4d9586e2b6
Update searx.data - update_ahmia_blacklist.py (#3398)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-11-06 13:06:02 +01:00
searx-bot f05572e380
Update searx.data - update_wikidata_units.py (#3399)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-11-06 13:05:57 +01:00
searx-bot 6f15b6b477
Update searx.data - update_external_bangs.py (#3400)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-11-06 13:05:45 +01:00
searx-bot 8e2761dcba
Update searx.data - update_currencies.py (#3401)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-11-06 13:05:41 +01:00
searx-bot f365e1f683
Update searx.data - update_firefox_version.py (#3402)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-11-06 13:05:16 +01:00
dependabot[bot] 806bd8045e
Bump babel from 2.10.3 to 2.11.0 (#3404)
Bumps [babel](https://github.com/python-babel/babel) from 2.10.3 to 2.11.0.
- [Release notes](https://github.com/python-babel/babel/releases)
- [Changelog](https://github.com/python-babel/babel/blob/master/CHANGES.rst)
- [Commits](https://github.com/python-babel/babel/compare/v2.10.3...v2.11.0)

---
updated-dependencies:
- dependency-name: babel
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-04 09:13:10 +01:00
Brett Kosinski 3c84af95ba
Fix scraping of 'sc' value from homepage (#3397)
Looking at the current HTML for the Startpage front page, the previous
footer logo element is no longer present.  This change scrapes the "sc"
parameter from one of the hidden HTML form elements, which should
(hopefully) be a bit more stable long term, since that form is used by
Startpage to submit requests to the engine.
2022-10-31 22:34:43 +01:00
searx-bot a9a6c58d26
Update searx.data - update_wikidata_units.py (#3373)
Co-authored-by: dalf <dalf@users.noreply.github.com>
Co-authored-by: Noémi Ványi <kvch@users.noreply.github.com>
2022-10-31 22:30:53 +01:00
dependabot[bot] 915bc3ad58
Bump pylint from 2.15.0 to 2.15.5 (#3394)
Bumps [pylint](https://github.com/PyCQA/pylint) from 2.15.0 to 2.15.5.
- [Release notes](https://github.com/PyCQA/pylint/releases)
- [Commits](https://github.com/PyCQA/pylint/compare/v2.15.0...v2.15.5)

---
updated-dependencies:
- dependency-name: pylint
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-31 22:29:12 +01:00
dependabot[bot] 05d8bce379
Bump sphinx from 5.2.2 to 5.3.0 (#3384)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 5.2.2 to 5.3.0.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/master/CHANGES)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v5.2.2...v5.3.0)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-31 22:29:00 +01:00
searx-bot bf0a583f4b
Update searx.data - update_firefox_version.py (#3371)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-10-03 14:59:20 +02:00
searx-bot a8810f4813
Update searx.data - update_ahmia_blacklist.py (#3372)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-10-03 14:59:03 +02:00
searx-bot c8c922cad4
Update searx.data - update_currencies.py (#3375)
Co-authored-by: dalf <dalf@users.noreply.github.com>
2022-10-03 14:58:29 +02:00
dependabot[bot] 5d6fe4f332
Bump selenium from 4.4.3 to 4.5.0 (#3368)
Bumps [selenium](https://github.com/SeleniumHQ/Selenium) from 4.4.3 to 4.5.0.
- [Release notes](https://github.com/SeleniumHQ/Selenium/releases)
- [Commits](https://github.com/SeleniumHQ/Selenium/commits/selenium-4.5.0)

---
updated-dependencies:
- dependency-name: selenium
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-09-30 23:28:16 +02:00
dependabot[bot] 4e073bd708
Bump certifi from 2022.9.14 to 2022.9.24 (#3369)
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.9.14 to 2022.9.24.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2022.09.14...2022.09.24)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-09-30 23:28:07 +02:00
dependabot[bot] 05977f3221
Bump sphinx from 5.1.1 to 5.2.2 (#3370)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 5.1.1 to 5.2.2.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/5.x/CHANGES)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v5.1.1...v5.2.2)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-09-30 23:27:57 +02:00
Noémi Ványi a1e2c501d2 add blogpost about private searx and fix uwsgi installation guide 2022-09-30 23:17:14 +02:00
Kian-Meng Ang 629ebb426f
Fix typos (#3366)
Found via `codespell -S ./searx/translations,./searx/data,./searx/static -L ans,te,fo,doubleclick,tthe,dum`
2022-09-29 23:06:59 +02:00
Noémi Ványi 57e7e3bbf6 fix issue reported by linter 2022-09-29 22:58:43 +02:00
Noémi Ványi 539e1a873e Add documentation about offline engines 2022-09-29 22:55:03 +02:00
Adam Tauber 31eef5b9db
Merge pull request #3367 from br4nnigan/master
allow engines to override pretty_url and use this in bing to show mea…
2022-09-29 20:50:33 +00:00
br4nnigan a9dadda6f7 allow engines to override pretty_url and use this in bing to show meaningful urls 2022-09-28 20:49:51 +02:00
Adam Tauber 2222caec22 [enh] add omnom engine 2022-09-20 23:04:25 +02:00
Markus Heiser 1abecbc835 [fix] google - simplify XPath selectors to fetch more results
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
2022-09-18 19:47:09 +02:00
Émilien Devos d656b340ee output format protobuf to HTML for google mobile 2022-09-18 19:44:48 +02:00
Rob 9e8995e13d
[fix] brave engine: no search results (#3361)
There were never any results from the Brave search engine,
so the url_xpath was fixed and now Brave search results are working.
2022-09-18 19:32:23 +02:00
dependabot[bot] d471c4a3f4
Bump certifi from 2022.6.15 to 2022.9.14 (#3363)
Bumps [certifi](https://github.com/certifi/python-certifi) from 2022.6.15 to 2022.9.14.
- [Release notes](https://github.com/certifi/python-certifi/releases)
- [Commits](https://github.com/certifi/python-certifi/compare/2022.06.15...2022.09.14)

---
updated-dependencies:
- dependency-name: certifi
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-09-18 19:20:14 +02:00
Elena Poelman ca27e91594
New search engine: IPFS search (#3218)
* Feat: initial support for the ipfs-search engine

* Feat: enable paging for ipfs search

* Make ipfs-search code more readable

* added support for images, music and video to the ipfs search engine

* FIX: redefined some variables that were redefining built-ins

* adjust code so it works on older python versions

* Feat: add support for time ranges
2022-09-07 22:13:19 +02:00
dependabot[bot] d86cb95560
Bump pylint from 2.14.5 to 2.15.0 (#3353)
* Bump pylint from 2.14.5 to 2.15.0

Bumps [pylint](https://github.com/PyCQA/pylint) from 2.14.5 to 2.15.0.
- [Release notes](https://github.com/PyCQA/pylint/releases)
- [Commits](https://github.com/PyCQA/pylint/compare/v2.14.5...v2.15.0)

---
updated-dependencies:
- dependency-name: pylint
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix code

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Noémi Ványi <sitbackandwait@gmail.com>
2022-09-06 22:23:34 +02:00
Noémi Ványi 6d40961682 Fix Portuguese (Brazil) translation
Closes #3348
2022-08-23 22:19:09 +02:00
dependabot[bot] 7bc0f3cc89
Bump selenium from 4.4.0 to 4.4.3 (#3346)
Bumps [selenium](https://github.com/SeleniumHQ/Selenium) from 4.4.0 to 4.4.3.
- [Release notes](https://github.com/SeleniumHQ/Selenium/releases)
- [Commits](https://github.com/SeleniumHQ/Selenium/commits)

---
updated-dependencies:
- dependency-name: selenium
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-20 22:29:58 +02:00
Noémi Ványi d004439646 Update documentation to fix blog link
Closes #3342
2022-08-15 22:54:23 +02:00
dependabot[bot] 319a24317e
Bump selenium from 4.3.0 to 4.4.0 (#3339)
Bumps [selenium](https://github.com/SeleniumHQ/Selenium) from 4.3.0 to 4.4.0.
- [Release notes](https://github.com/SeleniumHQ/Selenium/releases)
- [Commits](https://github.com/SeleniumHQ/Selenium/compare/selenium-4.3.0...selenium-4.4.0)

---
updated-dependencies:
- dependency-name: selenium
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-13 12:06:11 +02:00
dependabot[bot] abc877d86e
Bump setproctitle from 1.3.1 to 1.3.2 (#3338)
Bumps [setproctitle](https://github.com/dvarrazzo/py-setproctitle) from 1.3.1 to 1.3.2.
- [Release notes](https://github.com/dvarrazzo/py-setproctitle/releases)
- [Changelog](https://github.com/dvarrazzo/py-setproctitle/blob/master/HISTORY.rst)
- [Commits](https://github.com/dvarrazzo/py-setproctitle/commits)

---
updated-dependencies:
- dependency-name: setproctitle
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-13 12:06:00 +02:00
dependabot[bot] 245334c1ab
Bump flask from 2.2.1 to 2.2.2 (#3337)
Bumps [flask](https://github.com/pallets/flask) from 2.2.1 to 2.2.2.
- [Release notes](https://github.com/pallets/flask/releases)
- [Changelog](https://github.com/pallets/flask/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/flask/compare/2.2.1...2.2.2)

---
updated-dependencies:
- dependency-name: flask
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-08-13 12:05:43 +02:00
Morten Lautrup f199100e40
Fix typo in utils/searx.sh (#3331) 2022-08-08 20:19:42 +02:00
Noémi Ványi c1a611c6b9 Fix command to get git version 2022-08-07 18:20:22 +02:00
86 changed files with 20156 additions and 29149 deletions

View File

@ -424,7 +424,7 @@ Special thanks to `NLNet <https://nlnet.nl>`__ for sponsoring multiple features
- Removed engines: faroo
Special thanks to `NLNet <https://nlnet.nl>`__ for sponsoring multiple features of this release.
Special thanks to https://www.accessibility.nl/english for making accessibilty audit.
Special thanks to https://www.accessibility.nl/english for making accessibility audit.
News
~~~~

View File

@ -16,7 +16,7 @@
## Author's checklist
<!-- additional notes for reviewiers -->
<!-- additional notes for reviewers -->
## Related issues

View File

@ -1,5 +1,7 @@
.. SPDX-License-Identifier: AGPL-3.0-or-later
Searx is no longer maintained. Thank you for your support and all your contributions.
.. figure:: https://raw.githubusercontent.com/searx/searx/master/searx/static/themes/oscar/img/logo_searx_a.png
:target: https://searx.github.io/searx/
:alt: searX
@ -73,28 +75,21 @@ Frequently asked questions
Is searx in maintenance mode?
#############################
No, searx is accepting new features, including new engines. We are also adding
engine fixes or other bug fixes when needed. Also, keep in mind that searx is
maintained by volunteers who work in their free time. So some changes might take
some time to be merged.
We reject features that might violate the privacy of users. If you really want
such a feature, it must be disabled by default and warn users about the consequances
of turning it off.
No, searx is no longer maintained.
What is the difference between searx and SearxNG?
#################################################
TL;DR: If you want to run a public instance, go with SearxNG. If you want to
self host your own instance, choose searx.
TL;DR: SearXNG is for users that want more features and bugs getting fixed quicker.
If you prefer a minimalist software and stable experience, use searx.
SearxNG is a fork of searx, created by a former maintainer of searx. The fork
was created because the majority of the maintainers at the time did not find
the new proposed features privacy respecting enough. The most significant issue is with
engine metrics.
Searx is built for privacy conscious users. It comes a unique set of
challanges. One of the problems we face is that users rather not report bugs,
Searx is built for privacy conscious users. It comes with a unique set of
challenges. One of the problems we face is that users rather not report bugs,
because they do not want to publicly share what engines they use or what search
query triggered a problem. It is a challenge we accepted.
@ -124,8 +119,8 @@ instances locally, instead of using public instances.
Why should I use SearxNG?
#########################
SearxNG has rolling releases, depencencies updated more frequently, and engines are fixed
SearxNG has rolling releases, dependencies updated more frequently, and engines are fixed
faster. It is easy to set up your own public instance, and monitor its
perfomance and metrics. It is simple to maintain as an instance adminstrator.
performance and metrics. It is simple to maintain as an instance administrator.
As a user, it provides a prettier user interface and nicer experience.

View File

@ -100,7 +100,7 @@ update_conf() {
# There is a new version
if [ $FORCE_CONF_UPDATE -ne 0 ]; then
# Replace the current configuration
printf '⚠️ Automaticaly update %s to the new version\n' "${CONF}"
printf '⚠️ Automatically update %s to the new version\n' "${CONF}"
if [ ! -f "${OLD_CONF}" ]; then
printf 'The previous configuration is saved to %s\n' "${OLD_CONF}"
mv "${CONF}" "${OLD_CONF}"

View File

@ -9,7 +9,7 @@ workers = 4
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpretor config
# Plugin to use and interpreter config
single-interpreter = true
master = true
plugin = python3

View File

@ -0,0 +1,129 @@
=====================================
Run shell commands from your instance
=====================================
Command line engines are custom engines that run commands in the shell of the
host. In this article you can learn how to create a command engine and how to
customize the result display.
The command
===========
When specifying commands, you must make sure the commands are available on the
searx host. Searx will not install anything for you. Also, make sure that the
``searx`` user on your host is allowed to run the selected command and has
access to the required files.
Access control
==============
Be careful when creating command engines if you are running a public
instance. Do not expose any sensitive information. You can restrict access by
configuring a list of access tokens under ``tokens`` in your ``settings.yml``.
Available settings
==================
* ``command``: A comma separated list of the elements of the command. A special
token ``{{QUERY}}`` tells searx where to put the search terms of the
user. Example: ``['ls', '-l', '-h', '{{QUERY}}']``
* ``query_type``: The expected type of user search terms. Possible values:
``path`` and ``enum``. ``path`` checks if the user-provided path is inside the
working directory. If not, the query is not executed. ``enum`` is a list of
allowed search terms. If the user submits something which is not included in
the list, the query returns an error.
* ``delimiter``: A dict containing a delimiter char and the "titles" of each
element in keys.
* ``parse_regex``: A dict containing the regular expressions for each result
key (see the sketch after this list).
* ``query_enum``: A list containing allowed search terms if ``query_type`` is
set to ``enum``.
* ``working_dir``: The directory where the command has to be executed. Default:
``.``
* ``result_separator``: The character that separates results. Default: ``\n``
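As a rough sketch of how ``query_enum`` and ``parse_regex`` fit together, the
configuration below wraps ``du -sh`` and splits each output line into a size and
a path. The command, the allowed paths and the regular expressions are only
placeholders; adapt them to the actual output of the command on your host.

.. code:: yaml

  - name : disk usage
    engine : command
    command : ['du', '-sh', '{{QUERY}}']
    query_type : enum
    query_enum : ['/var/log', '/tmp']
    shortcut : du
    tokens : []
    disabled : True
    parse_regex :
        size : '^[0-9,.]+[KMGT]?'
        path : '\S+$'

Because ``query_type`` is set to ``enum``, only the two listed paths are
accepted as search terms; any other query returns an error.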
Customize the result template
=============================
There is a default result template for displaying key-value pairs coming from
command engines. If you want something more tailored to your result types, you
can design your own template.
Searx relies on `Jinja2 <https://jinja.palletsprojects.com/>`_ for
templating. If you are familiar with Jinja, you will not have any issues
creating templates. You can access the result attributes with ``{{
result.attribute_name }}``.
In the example below the result has two attributes: ``header`` and ``content``.
To customize their display, you need the following template (you must define
these classes yourself):
.. code:: html
  <div class="result">
      <div class="result-header">
          {{ result.header }}
      </div>
      <div class="result-content">
          {{ result.content }}
      </div>
  </div>
Then put your template under ``searx/templates/{theme-name}/result_templates``
named ``your-template-name.html``. You can select your custom template with the
option ``result_template``.
.. code:: yaml
  - name: your engine name
    engine: command
    result_template: your-template-name.html
Examples
========
Find files by name
------------------
The first example is to find files on your searx host. It uses the command
`find` available on most Linux distributions. It expects a path type query. The
path in the search request must be inside the ``working_dir``.
The results are displayed with the default `key-value.html` template. A result
is displayed in a single row table with the key "line".
.. code:: yaml
  - name : find
    engine : command
    command : ['find', '.', '-name', '{{QUERY}}']
    query_type : path
    shortcut : fnd
    tokens : []
    disabled : True
    delimiter :
        chars : ' '
        keys : ['line']
Find files by contents
-----------------------
In the second example, we define an engine that searches in the contents of the
files under the ``working_dir``. The search type is not defined, so the user can
input any string they want. To restrict the input, you can set the ``query_type``
to ``enum`` and only allow a set of search terms to protect
yourself. Alternatively, make the engine private, so that malicious users cannot
access the engine.
.. code:: yaml
  - name : regex search in files
    engine : command
    command : ['grep', '{{QUERY}}']
    shortcut : gr
    tokens : []
    disabled : True
    delimiter :
        chars : ' '
        keys : ['line']

View File

@ -37,7 +37,7 @@ Disabled **D** Engine type **ET**
------------- ----------- -------------------- ------------
Safe search **SS**
------------- ----------- ---------------------------------
Weigth **W**
Weight **W**
------------- ----------- ---------------------------------
Disabled **D**
------------- ----------- ---------------------------------
@ -86,3 +86,60 @@ Show errors **DE**
{% endfor %}
.. flat-table:: Additional engines (commented out in settings.yml)
:header-rows: 1
:stub-columns: 2
* - Name
- Base URL
- Host
- Port
- Paging
* - elasticsearch
- localhost:9200
-
-
- False
* - meilisearch
- localhost:7700
-
-
- True
* - mongodb
-
- 127.0.0.1
- 27017
- True
* - mysql_server
-
- 127.0.0.1
- 3306
- True
* - postgresql
-
- 127.0.0.1
- 5432
- True
* - redis_server
-
- 127.0.0.1
- 6379
- False
* - solr
- localhost:8983
-
-
- True
* - sqlite
-
-
-
- True

View File

@ -39,7 +39,7 @@ Example
Scenario:
#. Recoll indexes a local filesystem mounted in ``/export/documents/reference``,
#. the Recoll search inteface can be reached at https://recoll.example.org/ and
#. the Recoll search interface can be reached at https://recoll.example.org/ and
#. the contents of this filesystem can be reached though https://download.example.org/reference
.. code:: yaml

View File

@ -19,5 +19,9 @@ Administrator documentation
filtron
morty
engines
private-engines
command-engine
indexer-engines
no-sql-engines
plugins
buildhosts

View File

@ -0,0 +1,89 @@
==================
Search in indexers
==================
Searx supports three popular indexer search engines:
* Elasticsearch
* Meilisearch
* Solr
Elasticsearch
=============
Make sure that the Elasticsearch user has access to the index you are querying.
If you are not using TLS during your connection, set ``enable_http`` to ``True``.
.. code:: yaml
  - name : elasticsearch
    shortcut : es
    engine : elasticsearch
    base_url : http://localhost:9200
    username : elastic
    password : changeme
    index : my-index
    query_type : match
    enable_http : True
Available settings
------------------
* ``base_url``: URL of Elasticsearch instance. By default it is set to ``http://localhost:9200``.
* ``index``: Name of the index to query. Required.
* ``query_type``: Elasticsearch query method to use. Available: ``match``,
``simple_query_string``, ``term``, ``terms``, ``custom`` (see the sketch after
this list).
* ``custom_query_json``: If you selected ``custom`` for ``query_type``, you must
provide the JSON payload in this option.
* ``username``: Username in Elasticsearch
* ``password``: Password for the Elasticsearch user
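The query method is selected purely in the engine configuration. For instance,
here is a minimal sketch that switches the example above to
``simple_query_string``; the index name and the credentials are placeholders:

.. code:: yaml

  - name : elasticsearch-notes
    shortcut : esn
    engine : elasticsearch
    base_url : http://localhost:9200
    username : elastic
    password : changeme
    index : my-notes
    query_type : simple_query_string
    enable_http : True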
Meilisearch
===========
If you are not using TLS during connection, set ``enable_http`` to ``True``.
.. code:: yaml
  - name : meilisearch
    engine : meilisearch
    shortcut: mes
    base_url : http://localhost:7700
    index : my-index
    enable_http: True
Available settings
------------------
* ``base_url``: URL of the Meilisearch instance. By default it is set to http://localhost:7700
* ``index``: Name of the index to query. Required.
* ``auth_key``: Key required for authentication.
* ``facet_filters``: List of facets to search in (see the sketch after this list).
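A minimal sketch that extends the example above with authentication and a facet
filter; the key and the facet value are placeholders and must match your own
Meilisearch setup:

.. code:: yaml

  - name : meilisearch
    engine : meilisearch
    shortcut: mes
    base_url : http://localhost:7700
    index : my-index
    auth_key : my-meilisearch-key
    facet_filters : ['category:books']
    enable_http: True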
Solr
====
If you are not using TLS during connection, set ``enable_http`` to ``True``.
.. code:: yaml
  - name : solr
    engine : solr
    shortcut : slr
    base_url : http://localhost:8983
    collection : my-collection
    sort : asc
    enable_http : True
Available settings
------------------
* ``base_url``: URL of the Solr instance. By default it is set to http://localhost:8983
* ``collection``: Name of the collection to query. Required.
* ``sort``: Sorting of the results. Available: ``asc``, ``desc``.
* ``rows``: Maximum number of results from a query. Default value: 10.
* ``field_list``: List of fields returned from the query.
* ``default_fields``: Default fields to query.
* ``query_fields``: List of fields with a boost factor. The bigger the boost
factor of a field, the more important the field is in the query. Example:
``qf="field1^2.3 field2"`` (see the sketch after this list).
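As a sketch of these options, the configuration below limits a query to 20 rows
and boosts the ``title`` field; the collection and the field names are
placeholders:

.. code:: yaml

  - name : solr
    engine : solr
    shortcut : slr
    base_url : http://localhost:8983
    collection : my-collection
    rows : 20
    sort : desc
    query_fields : 'title^2.3 content'
    enable_http : True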

View File

@ -94,8 +94,8 @@ My experience is, that this command is a bit buggy.
.. _uwsgi configuration:
Alltogether
===========
All together
============
Create the configuration ini-file according to your distribution (see below) and
restart the uwsgi application.

View File

@ -0,0 +1,170 @@
===========================
Query SQL and NoSQL servers
===========================
SQL
===
SQL servers are traditional databases with predefined data schema. Furthermore,
modern versions also support BLOB data.
You can search in the following servers:
* `PostgreSQL`_
* `MySQL`_
* `SQLite`_
The configuration of the new database engines is similar. You must put a valid
SELECT SQL query in ``query_str``. At the moment you can only bind at most
one parameter in your query.
Do not include LIMIT or OFFSET in your SQL query as the engines
rely on these keywords during paging.
PostgreSQL
----------
Required PyPi package: ``psycopg2``
You can find an example configuration below:
.. code:: yaml
  - name : postgresql
    engine : postgresql
    database : my_database
    username : searx
    password : password
    query_str : 'SELECT * from my_table WHERE my_column = %(query)s'
    shortcut : psql
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running PostgreSQL. By default it is ``127.0.0.1``.
* ``port``: Port number PostgreSQL is listening on. By default it is ``5432``.
* ``database``: Name of the database you are connecting to.
* ``username``: Name of the user connecting to the database.
* ``password``: Password of the database user.
* ``query_str``: Query string to run. Keywords like ``LIMIT`` and ``OFFSET`` are not allowed. Required.
* ``limit``: Number of returned results per page. By default it is 10.
MySQL
-----
Required PyPi package: ``mysql-connector-python``
This is an example configuration for querying a MySQL server:
.. code:: yaml
  - name : mysql
    engine : mysql_server
    database : my_database
    username : searx
    password : password
    limit : 5
    query_str : 'SELECT * from my_table WHERE my_column=%(query)s'
    shortcut : mysql
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running MySQL. By default it is ``127.0.0.1``.
* ``port``: Port number MySQL is listening on. By default it is ``3306``.
* ``database``: Name of the database you are connecting to.
* ``auth_plugin``: Authentication plugin to use. By default it is ``caching_sha2_password`` (see the sketch after this list).
* ``username``: Name of the user connecting to the database.
* ``password``: Password of the database user.
* ``query_str``: Query string to run. Keywords like ``LIMIT`` and ``OFFSET`` are not allowed. Required.
* ``limit``: Number of returned results per page. By default it is 10.
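For example, here is a sketch of a configuration for a server that still uses
the older ``mysql_native_password`` authentication plugin; the host, the
credentials and the query are placeholders:

.. code:: yaml

  - name : legacy mysql
    engine : mysql_server
    host : 192.168.0.10
    database : my_database
    auth_plugin : mysql_native_password
    username : searx
    password : password
    query_str : 'SELECT * from my_table WHERE my_column=%(query)s'
    shortcut : lmy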
SQLite
------
You can read from your database ``my_database`` using this example configuration:
.. code:: yaml
  - name : sqlite
    engine : sqlite
    shortcut: sq
    database : my_database
    query_str : 'SELECT * FROM my_table WHERE my_column=:query'
Available options
~~~~~~~~~~~~~~~~~
* ``database``: Name of the database you are connecting to.
* ``query_str``: Query string to run. Keywords like ``LIMIT`` and ``OFFSET`` are not allowed. Required.
* ``limit``: Number of returned results per page. By default it is 10.
NoSQL
=====
NoSQL data stores are used for storing arbitrary data without first defining their
structure. To query the supported servers, you must install their drivers using PyPi.
You can search in the following servers:
* `Redis`_
* `MongoDB`_
Redis
-----
Required PyPi package: ``redis``
Example configuration:
.. code:: yaml
  - name : mystore
    engine : redis_server
    exact_match_only : True
    host : 127.0.0.1
    port : 6379
    password : secret-password
    db : 0
    shortcut : rds
    enable_http : True
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running Redis. By default it is ``127.0.0.1``.
* ``port``: Port number Redis is listening on. By default it is ``6379``.
* ``password``: Password if required by Redis.
* ``db``: Number of the database you are connecting to.
* ``exact_match_only``: Enable if you need exact matching. By default it is ``True``.
MongoDB
-------
Required PyPi package: ``pymongo``
Below is an example configuration for using a MongoDB collection:
.. code:: yaml
  - name : mymongo
    engine : mongodb
    shortcut : icm
    host : '127.0.0.1'
    port : 27017
    database : personal
    collection : income
    key : month
    enable_http: True
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running MongoDB. By default it is ``127.0.0.1``.
* ``port``: Port number MongoDB is listening on. By default it is ``27017``.
* ``password``: Password if required by MongoDB.
* ``database``: Name of the database you are connecting to.
* ``collection``: Name of the collection you want to search in.
* ``exact_match_only``: Enable if you need exact matching. By default it is ``True``.

Binary file not shown (image, 74 KiB).

View File

@ -0,0 +1,44 @@
=============================
How to create private engines
=============================
If you are running your public searx instance, you might want to restrict access
to some engines. Maybe you are afraid that bots might abuse the engine. Or the
engine might return private results you do not want to share with strangers.
Server side configuration
=========================
You can make any engine private by setting a list of tokens in your settings.yml
file. In the following example, we set two different tokens that provide access
to the engine.
.. code:: yaml
  - name: my-private-google
    engine: google
    shortcut: pgo
    tokens: ['my-secret-token-1', 'my-secret-token-2']
To access the private engine, you must distribute the tokens to your searx
users. It is up to you how you let them know the access tokens you created.
Client side configuration
=========================
As a searx instance user, you can add any number of access tokens on the
Preferences page. You have to set a comma-separated list of strings in the "Engine
tokens" input, then save your new preferences.
.. image:: prefernces-private.png
:width: 600px
:align: center
:alt: location of token textarea
Once the Preferences page is loaded again, you can see information about the
private engines you have access to. If you cannot see the expected engines in the
engines list, double check your token. If there is no issue with the token,
contact your instance administrator.

View File

@ -129,7 +129,7 @@ Global Settings
outgoing: # communication with search engines
request_timeout : 2.0 # default timeout in seconds, can be override by engine
# max_request_timeout: 10.0 # the maximum timeout in seconds
useragent_suffix : "" # informations like an email address to the administrator
useragent_suffix : "" # information like an email address to the administrator
pool_connections : 100 # Number of different hosts
pool_maxsize : 10 # Number of simultaneous requests by host
# uncomment below section if you want to use a proxy

View File

@ -0,0 +1,48 @@
=================================
Private searx project is finished
=================================
We are officially finished with the Private searx project. The goal was to
extend searx capabilities beyond just searching on the Internet. We added
support for offline engines. These engines do not connect to the Internet,
they find results locally.
As some of the offline engines run commands on the searx host, we added an
option to protect any engine by making them private. Private engines can only be
accessed using a token.
After searx was prepared to run offline queries, we added numerous new engines:
1. Command line engine
2. MySQL
3. PostgreSQL
4. SQLite
5. Redis
6. MongoDB
We also added new engines that communicate over HTTP, but you might want to keep
them private:
1. Elasticsearch
2. Meilisearch
3. Solr
The last step was to document this work. We added new tutorials on creating
command engines, making engines private and also adding a custom result template
to your own engines.
Acknowledgement
===============
The project was sponsored by `Search and Discovery Fund`_ of `NLnet
Foundation`_. We would like to thank NLnet not only for the funds, but also for
the conversations and their ideas. They were truly invested and passionate about
supporting searx.
.. _Search and Discovery Fund: https://nlnet.nl/discovery
.. _NLnet Foundation: https://nlnet.nl/
| Happy hacking.
| kvch // 2022.09.30 23:15

View File

@ -15,3 +15,4 @@ Blog
search-indexer-engines
sql-engines
search-database-engines
documentation-offline-engines

View File

@ -207,7 +207,7 @@ debug services from filtron and morty analogous use:
Another point we have to notice is that each service (:ref:`searx <searx.sh>`,
:ref:`filtron <filtron.sh>` and :ref:`morty <morty.sh>`) runs under dedicated
system user account with the same name (compare :ref:`create searx user`). To
get a shell from theses accounts, simply call one of the scripts:
get a shell from these accounts, simply call one of the scripts:
.. tabs::
@ -311,7 +311,7 @@ of the container:
Now we can develop as usual in the working tree of our desktop system. Every
time the software was changed, you have to restart the searx service (in the
conatiner):
container):
.. tabs::
@ -370,7 +370,7 @@ We build up a fully functional searx suite in a archlinux container:
$ sudo -H ./utils/lxc.sh install suite searx-archlinux
To access HTTP from the desktop we installed nginx for the services inside the
conatiner:
container:
.. tabs::

View File

@ -16,7 +16,7 @@ you can use your owm template by placing the template under
``searx/templates/{theme_name}/result_templates/{template_name}`` and setting
``result_template`` attribute to ``{template_name}``.
Futhermore, if you do not want to expose these engines on a public instance, you can
Furthermore, if you do not want to expose these engines on a public instance, you can
still add them and limit the access by setting ``tokens`` as described in the `blog post about
private engines`_.
@ -29,7 +29,7 @@ structure.
Redis
-----
Reqired package: ``redis``
Required package: ``redis``
Redis is a key value based data store usually stored in memory.

View File

@ -15,7 +15,7 @@ All of the engines above are added to ``settings.yml`` just commented out, as yo
Please note that if you are not using HTTPS to access these engines, you have to enable
HTTP requests by setting ``enable_http`` to ``True``.
Futhermore, if you do not want to expose these engines on a public instance, you can
Furthermore, if you do not want to expose these engines on a public instance, you can
still add them and limit the access by setting ``tokens`` as described in the `blog post about
private engines`_.
@ -57,7 +57,7 @@ small-scale (less than 10 million documents) data collections. E.g. it is great
web pages you have visited and searching in the contents later.
The engine supports faceted search, so you can search in a subset of documents of the collection.
Futhermore, you can search in Meilisearch instances that require authentication by setting ``auth_token``.
Furthermore, you can search in Meilisearch instances that require authentication by setting ``auth_token``.
Here is a simple example to query a Meilisearch instance:

View File

@ -62,7 +62,7 @@ Before enabling MySQL engine, you must install the package ``mysql-connector-pyt
The authentication plugin is configurable by setting ``auth_plugin`` in the attributes.
By default it is set to ``caching_sha2_password``.
This is an example configuration for quering a MySQL server:
This is an example configuration for querying a MySQL server:
.. code:: yaml

View File

@ -10,7 +10,7 @@ from searx.version import VERSION_STRING
# Project --------------------------------------------------------------
project = u'searx'
copyright = u'2015-2021, Adam Tauber, Noémi Ványi'
copyright = u'2015-2022, Adam Tauber, Noémi Ványi'
author = u'Adam Tauber'
release, version = VERSION_STRING, VERSION_STRING
highlight_language = 'none'
@ -101,13 +101,11 @@ imgmath_font_size = 14
html_theme_options = {"index_sidebar_logo": True}
html_context = {"project_links": [] }
html_context["project_links"].append(ProjectLink("Blog", "blog/index.html"))
html_context["project_links"].append(ProjectLink("Blog", brand.DOCS_URL + "/blog/index.html"))
if brand.GIT_URL:
html_context["project_links"].append(ProjectLink("Source", brand.GIT_URL))
if brand.WIKI_URL:
html_context["project_links"].append(ProjectLink("Wiki", brand.WIKI_URL))
if brand.PUBLIC_INSTANCES:
html_context["project_links"].append(ProjectLink("Public instances", brand.PUBLIC_INSTANCES))
if brand.TWITTER_URL:
html_context["project_links"].append(ProjectLink("Twitter", brand.TWITTER_URL))
if brand.ISSUE_URL:

View File

@ -41,7 +41,7 @@ engine file
argument type information
======================= =========== ========================================================
categories list pages, in which the engine is working
paging boolean support multible pages
paging boolean support multiple pages
time_range_support boolean support search time range
engine_type str ``online`` by default, other possibles values are
``offline``, ``online_dictionary``, ``online_currency``
@ -159,7 +159,7 @@ parsed arguments
----------------
The function ``def request(query, params):`` always returns the ``params``
variable. Inside searx, the following paramters can be used to specify a search
variable. Inside searx, the following parameters can be used to specify a search
request:
=================== =========== ==========================================================================

View File

@ -15,7 +15,7 @@ generated and deployed at :docs:`github.io <.>`. For build prerequisites read
:ref:`docs build`.
The source files of Searx's documentation are located at :origin:`docs`. Sphinx
assumes source files to be encoded in UTF-8 by defaul. Run :ref:`make docs.live
assumes source files to be encoded in UTF-8 by default. Run :ref:`make docs.live
<make docs.live>` to build HTML while editing.
.. sidebar:: Further reading
@ -227,13 +227,13 @@ To refer anchors use the `ref role`_ markup:
.. code:: reST
Visit chapter :ref:`reST anchor`. Or set hyperlink text manualy :ref:`foo
Visit chapter :ref:`reST anchor`. Or set hyperlink text manually :ref:`foo
bar <reST anchor>`.
.. admonition:: ``:ref:`` role
:class: rst-example
Visist chapter :ref:`reST anchor`. Or set hyperlink text manualy :ref:`foo
Visist chapter :ref:`reST anchor`. Or set hyperlink text manually :ref:`foo
bar <reST anchor>`.
.. _reST ordinary ref:
@ -494,8 +494,8 @@ Figures & Images
is flexible. To get best results in the generated output format, install
ImageMagick_ and Graphviz_.
Searx's sphinx setup includes: :ref:`linuxdoc:kfigure`. Scaleable here means;
scaleable in sense of the build process. Normally in absence of a converter
Searx's sphinx setup includes: :ref:`linuxdoc:kfigure`. Scalable here means;
scalable in sense of the build process. Normally in absence of a converter
tool, the build process will break. From the authors POV its annoying to care
about the build process when handling with images, especially since he has no
access to the build process. With :ref:`linuxdoc:kfigure` the build process
@ -503,7 +503,7 @@ continues and scales output quality in dependence of installed image processors.
If you want to add an image, you should use the ``kernel-figure`` (inheritance
of :dudir:`figure`) and ``kernel-image`` (inheritance of :dudir:`image`)
directives. E.g. to insert a figure with a scaleable image format use SVG
directives. E.g. to insert a figure with a scalable image format use SVG
(:ref:`svg image example`):
.. code:: reST
@ -1185,7 +1185,7 @@ and *targets* (e.g. a ref to :ref:`row 2 of table's body <row body 2>`).
- cell 4.4
* - row 5
- cell 5.1 with automatic span to rigth end
- cell 5.1 with automatic span to right end
* - row 6
- cell 6.1
@ -1237,7 +1237,7 @@ and *targets* (e.g. a ref to :ref:`row 2 of table's body <row body 2>`).
- cell 4.4
* - row 5
- cell 5.1 with automatic span to rigth end
- cell 5.1 with automatic span to right end
* - row 6
- cell 6.1

View File

@ -8,9 +8,6 @@ Searx is a free internet metasearch engine which aggregates results from more
than 70 search services. Users are neither tracked nor profiled. Additionally,
searx can be used over Tor for online anonymity.
Get started with searx by using one of the Searx-instances_. If you don't trust
anyone, you can set up your own, see :ref:`installation`.
.. sidebar:: Features
- Self hosted
@ -33,5 +30,3 @@ anyone, you can set up your own, see :ref:`installation`.
searx_extra/index
utils/index
blog/index
.. _Searx-instances: https://searx.space

View File

@ -17,7 +17,7 @@ Prefix: ``:``
Prefix: ``?``
to add engines and categories to the currently selected categories
Abbrevations of the engines and languages are also accepted. Engine/category
Abbreviations of the engines and languages are also accepted. Engine/category
modifiers are chainable and inclusive (e.g. with :search:`!it !ddg !wp qwer
<?q=%21it%20%21ddg%20%21wp%20qwer>` search in IT category **and** duckduckgo
**and** wikipedia for ``qwer``).

manage
View File

@ -188,13 +188,11 @@ docker.build() {
die 1 "there is no remote origin"
fi
# This is a git repository
# "git describe" to get the Docker version (for example : v0.15.0-89-g0585788e)
# awk to remove the "v" and the "g"
SEARX_GIT_VERSION=$(git describe --match "v[0-9]*\.[0-9]*\.[0-9]*" HEAD 2>/dev/null | awk -F'-' '{OFS="-"; $1=substr($1, 2); if ($3) { $3=substr($3, 2); } print}')
SEARX_GIT_VERSION=$(git describe --tags | awk -F'-' '{OFS="-"; $1=substr($1, 2); if ($3) { $3=substr($3, 2); } print}')
# add the suffix "-dirty" if the repository has uncommited change
# add the suffix "-dirty" if the repository has uncommitted change
# /!\ HACK for searx/searx: ignore utils/brand.env
git update-index -q --refresh
if [ ! -z "$(git diff-index --name-only HEAD -- | grep -v 'utils/brand.env')" ]; then
@ -286,9 +284,6 @@ node.env() {
which npm &> /dev/null || die 1 'node.env - npm is not found!'
( set -e
# shellcheck disable=SC2030
PATH="$(npm bin):$PATH"
export PATH
build_msg INSTALL "npm install $NPM_PACKAGES"
# shellcheck disable=SC2086

View File

@ -1,19 +1,19 @@
mock==4.0.3
mock==5.0.1
nose2[coverage_plugin]==0.12.0
cov-core==1.15.0
pycodestyle==2.9.1
pylint==2.14.5
splinter==0.18.1
pycodestyle==2.10.0
pylint==2.15.9
splinter==0.19.0
transifex-client==0.14.3; python_version < '3.10'
transifex-client==0.12.5; python_version == '3.10'
selenium==4.3.0
twine==4.0.1
Pallets-Sphinx-Themes==2.0.2
selenium==4.8.3
twine==4.0.2
Pallets-Sphinx-Themes==2.0.3
docutils==0.18
Sphinx==5.1.1
Sphinx==5.3.0
sphinx-issues==3.0.1
sphinx-jinja==2.0.2
sphinx-tabs==3.4.1
sphinxcontrib-programoutput==0.17
sphinx-autobuild==2021.3.14
linuxdoc==20211220
linuxdoc==20221127

View File

@ -1,13 +1,13 @@
Brotli==1.0.9
babel==2.10.3
certifi==2022.6.15
babel==2.11.0
certifi==2022.12.7
flask-babel==2.0.0
flask==2.2.1
flask==2.2.2
jinja2==3.1.2
langdetect==1.0.9
lxml==4.9.1
lxml==4.9.2
pygments==2.12.0
python-dateutil==2.8.2
pyyaml==6.0
requests[socks]==2.28.1
setproctitle==1.3.1
requests[socks]==2.28.2
setproctitle==1.3.2

View File

@ -41,11 +41,11 @@ if settings['ui']['static_path']:
'''
enable debug if
the environnement variable SEARX_DEBUG is 1 or true
the environment variable SEARX_DEBUG is 1 or true
(whatever the value in settings.yml)
or general.debug=True in settings.yml
disable debug if
the environnement variable SEARX_DEBUG is 0 or false
the environment variable SEARX_DEBUG is 0 or false
(whatever the value in settings.yml)
or general.debug=False in settings.yml
'''

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@ -1,8 +1,9 @@
{
"versions": [
"102.0",
"101.0.1",
"101.0"
"111.0.1",
"111.0",
"110.0.1",
"110.0"
],
"os": [
"Windows NT 10.0; WOW64",

View File

@ -153,7 +153,6 @@
"Q107164998": "cd mm²/m²",
"Q107210119": "g/s",
"Q107210344": "mg/s",
"Q107213614": "kJ/100g",
"Q107226391": "cm⁻¹",
"Q1072404": "K",
"Q107244316": "mm⁻¹",
@ -208,16 +207,38 @@
"Q1091257": "tex",
"Q1092296": "a",
"Q110143852": "Ω cm",
"Q110143896": "cm³/g",
"Q1104069": "$",
"Q11061003": "μm²",
"Q11061005": "nm²",
"Q110742003": "dppx",
"Q1131660": "st",
"Q1137675": "cr",
"Q114002440": "𒄀",
"Q114002534": "𒃻",
"Q114002639": "𒈨𒊑",
"Q114002796": "𒂆",
"Q114002930": "𒀺",
"Q114002955": "𒀹𒃷",
"Q114002974": "𒃷",
"Q1140444": "Zb",
"Q1140577": "Yb",
"Q114589269": "A",
"Q1152074": "Pb",
"Q1152323": "Tb",
"Q115277430": "QB",
"Q115280832": "RB",
"Q115359862": "qg",
"Q115359863": "rg",
"Q115359865": "Rg",
"Q115359866": "Qg",
"Q115359910": "Rm",
"Q115533751": "rm",
"Q115533764": "qm",
"Q115533776": "Qm",
"Q116432446": "ᵐ",
"Q116432563": "ˢ",
"Q116443090": "ʰ",
"Q1165799": "mil",
"Q11776930": "Mg",
"Q11830636": "psf",
@ -236,12 +257,14 @@
"Q12257695": "Eb/s",
"Q12257696": "EB/s",
"Q12261466": "kB/s",
"Q12263659": "mgal",
"Q12265780": "Pb/s",
"Q12265783": "PB/s",
"Q12269121": "Yb/s",
"Q12269122": "YB/s",
"Q12269308": "Zb/s",
"Q12269309": "ZB/s",
"Q1238720": "vols.",
"Q1247300": "cm H₂O",
"Q12714022": "cwt",
"Q12789864": "GeV",
@ -282,7 +305,6 @@
"Q14914907": "th",
"Q14916719": "Gpc",
"Q14923662": "Pm³",
"Q1511773": "LSd",
"Q15120301": "l atm",
"Q1542309": "xu",
"Q1545979": "ft³",
@ -304,7 +326,6 @@
"Q17255465": "v_P",
"Q173117": "R$",
"Q1741429": "kpm",
"Q174467": "Lm",
"Q174728": "cm",
"Q174789": "mm",
"Q175821": "μm",
@ -328,13 +349,11 @@
"Q182429": "m/s",
"Q1826195": "dl",
"Q18413919": "cm/s",
"Q184172": "F",
"Q185078": "a",
"Q185153": "erg",
"Q185648": "Torr",
"Q185759": "span",
"Q1872619": "zs",
"Q189097": "₧",
"Q190095": "Gy",
"Q19017495": "mm²",
"Q190951": "S$",
@ -350,6 +369,7 @@
"Q194339": "B$",
"Q1970718": "mam",
"Q1972579": "pdl",
"Q19877834": "cd-ft",
"Q199462": "LE",
"Q199471": "Afs",
"Q200323": "dm",
@ -388,7 +408,7 @@
"Q211256": "mi/h",
"Q21154419": "PD",
"Q211580": "BTU (th)",
"Q212120": "A h",
"Q212120": "Ah",
"Q213005": "G$",
"Q2140397": "in³",
"Q214377": "ell",
@ -428,7 +448,6 @@
"Q23931040": "dam²",
"Q23931103": "nmi²",
"Q240468": "syr£",
"Q2414435": "$b.",
"Q242988": "Lib$",
"Q2438073": "ag",
"Q2448803": "mV",
@ -546,7 +565,7 @@
"Q3773454": "Mpc",
"Q3815076": "Kib",
"Q3833309": "£",
"Q3858002": "mA h",
"Q3858002": "mAh",
"Q3867152": "ft/s²",
"Q389062": "Tib",
"Q3902688": "pl",
@ -607,6 +626,8 @@
"Q53393868": "GJ",
"Q53393886": "PJ",
"Q53393890": "EJ",
"Q53393893": "ZJ",
"Q53393898": "YJ",
"Q53448786": "yHz",
"Q53448790": "zHz",
"Q53448794": "fHz",
@ -620,6 +641,7 @@
"Q53448826": "hHz",
"Q53448828": "yJ",
"Q53448832": "zJ",
"Q53448835": "fJ",
"Q53448842": "pJ",
"Q53448844": "nJ",
"Q53448847": "μJ",
@ -682,6 +704,7 @@
"Q53951982": "Mt",
"Q53952048": "kt",
"Q54006645": "ZWb",
"Q54081354": "ZT",
"Q54081925": "ZSv",
"Q54082468": "ZS",
"Q54083144": "ZΩ",
@ -706,8 +729,6 @@
"Q56157046": "nmol",
"Q56157048": "pmol",
"Q56160603": "fmol",
"Q56302633": "UM",
"Q56317116": "mgal",
"Q56317622": "Q_P",
"Q56318907": "kbar",
"Q56349362": "Bs.S",
@ -1184,10 +1205,10 @@
"Q11570": "kg",
"Q11573": "m",
"Q11574": "s",
"Q11579": "K",
"Q11582": "L",
"Q12129": "pc",
"Q12438": "N",
"Q16068": "DM",
"Q1811": "AU",
"Q20764": "Ma",
"Q2101": "e",

View File

@ -142,7 +142,7 @@ def load_engine(engine_data):
engine.stats = {
'sent_search_count': 0, # sent search
'search_count': 0, # succesful search
'search_count': 0, # successful search
'result_count': 0,
'engine_time': 0,
'engine_time_count': 0,
@ -171,7 +171,7 @@ def load_engine(engine_data):
categories.setdefault(category_name, []).append(engine)
if engine.shortcut in engine_shortcuts:
logger.error('Engine config error: ambigious shortcut: {0}'.format(engine.shortcut))
logger.error('Engine config error: ambiguous shortcut: {0}'.format(engine.shortcut))
sys.exit(1)
engine_shortcuts[engine.shortcut] = engine.name

View File

@ -52,8 +52,7 @@ def request(query, params):
offset=offset)
params['url'] = base_url + search_path
params['headers']['User-Agent'] = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36')
params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
return params
@ -68,11 +67,13 @@ def response(resp):
for result in eval_xpath(dom, '//div[@class="sa_cc"]'):
link = eval_xpath(result, './/h3/a')[0]
url = link.attrib.get('href')
pretty_url = extract_text(eval_xpath(result, './/cite'))
title = extract_text(link)
content = extract_text(eval_xpath(result, './/p'))
# append result
results.append({'url': url,
'pretty_url': pretty_url,
'title': title,
'content': content})
@ -80,11 +81,13 @@ def response(resp):
for result in eval_xpath(dom, '//li[@class="b_algo"]'):
link = eval_xpath(result, './/h2/a')[0]
url = link.attrib.get('href')
pretty_url = extract_text(eval_xpath(result, './/cite'))
title = extract_text(link)
content = extract_text(eval_xpath(result, './/p'))
# append result
results.append({'url': url,
'pretty_url': pretty_url,
'title': title,
'content': content})

View File

@ -70,7 +70,7 @@ def request(query, params):
if params['time_range'] in time_range_dict:
params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])
# bing videos did not like "older" versions < 70.0.1 when selectin other
# bing videos did not like "older" versions < 70.0.1 when selecting other
# languages then 'en' .. very strange ?!?!
params['headers']['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64; rv:73.0.1) Gecko/20100101 Firefox/73.0.1'

View File

@ -18,7 +18,7 @@ from searx.poolrequests import get
# about
about = {
"website": 'https://lite.duckduckgo.com/lite',
"website": 'https://lite.duckduckgo.com/lite/',
"wikidata_id": 'Q12805',
"official_api_documentation": 'https://duckduckgo.com/api',
"use_official_api": False,
@ -45,7 +45,7 @@ language_aliases = {
time_range_dict = {'day': 'd', 'week': 'w', 'month': 'm', 'year': 'y'}
# search-url
url = 'https://lite.duckduckgo.com/lite'
url = 'https://lite.duckduckgo.com/lite/'
url_ping = 'https://duckduckgo.com/t/sl_l'
@ -73,6 +73,9 @@ def request(query, params):
# link again and again ..
params['headers']['Content-Type'] = 'application/x-www-form-urlencoded'
params['headers']['Origin'] = 'https://lite.duckduckgo.com'
params['headers']['Referer'] = 'https://lite.duckduckgo.com/'
params['headers']['User-Agent'] = 'Mozilla/5.0'
# initial page does not have an offset
if params['pageno'] == 2:

View File

@ -80,7 +80,7 @@ def response(resp):
# * book / performing art / film / television / media franchise / concert tour / playwright
# * prepared food
# * website / software / os / programming language / file format / software engineer
# * compagny
# * company
content = ''
heading = search_res.get('Heading', '')

View File

@ -40,7 +40,7 @@ def response(resp):
search_res = loads(resp.text)
# check if items are recieved
# check if items are received
if 'items' not in search_res:
return []

View File

@ -109,22 +109,15 @@ filter_mapping = {
# specific xpath variables
# ------------------------
# google results are grouped into <div class="jtfYYd ..." ../>
results_xpath = '//div[contains(@class, "jtfYYd")]'
results_xpath = '//div[contains(@class, "MjjYud")]'
title_xpath = './/h3[1]'
href_xpath = './/a/@href'
content_xpath = './/div[@data-sncf]'
results_xpath_mobile_ui = '//div[contains(@class, "g ")]'
# google *sections* are no usual *results*, we ignore them
g_section_with_header = './g-section-with-header'
# the title is a h3 tag relative to the result group
title_xpath = './/h3[1]'
# in the result group there is <div class="yuRUbf" ../> it's first child is a <a
# href=...>
href_xpath = './/div[@class="yuRUbf"]//a/@href'
# in the result group there is <div class="VwiC3b ..." ../> containing the *content*
content_xpath = './/div[contains(@class, "VwiC3b")]'
# Suggestions are links placed in a *card-section*, we extract only the text
# from the links not the links itself.
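The selectors above are consumed by the engine's response parser in the usual searx pattern (compare the bing.py and google response() hunks elsewhere in this diff). A simplified sketch, not the full parser, using the helpers that searx engines import from searx.utils:

from searx.utils import eval_xpath, extract_text

results_xpath = '//div[contains(@class, "MjjYud")]'
title_xpath = './/h3[1]'
href_xpath = './/a/@href'
content_xpath = './/div[@data-sncf]'

def parse_results(dom):
    results = []
    for result in eval_xpath(dom, results_xpath):
        links = eval_xpath(result, href_xpath)
        if not links:
            continue
        results.append({
            'url': links[0],
            'title': extract_text(eval_xpath(result, title_xpath)),
            'content': extract_text(eval_xpath(result, content_xpath)),
        })
    return results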
@ -213,7 +206,8 @@ def request(query, params):
additional_parameters = {}
if use_mobile_ui:
additional_parameters = {
'async': 'use_ac:true,_fmt:pc',
'asearch': 'arc',
'async': 'use_ac:true,_fmt:html',
}
# https://www.google.de/search?q=corona&hl=de&lr=lang_de&start=0&tbs=qdr%3Ad&safe=medium
@ -288,7 +282,7 @@ def response(resp):
# google *sections*
if extract_text(eval_xpath(result, g_section_with_header)):
logger.debug("ingoring <g-section-with-header>")
logger.debug("ignoring <g-section-with-header>")
continue
try:

View File

@ -2,7 +2,7 @@
"""Google (News)
For detailed description of the *REST-full* API see: `Query Parameter
Definitions`_. Not all parameters can be appied:
Definitions`_. Not all parameters can be applied:
- num_ : the number of search results is ignored
- save_ : is ignored / Google-News results are always *SafeSearch*
@ -155,7 +155,7 @@ def response(resp):
padding = (4 -(len(jslog) % 4)) * "="
jslog = b64decode(jslog + padding)
except binascii.Error:
# URL cant be read, skip this result
# URL can't be read, skip this result
continue
# now we have : b'[null, ... null,"https://www.cnn.com/.../index.html"]'

View File

@ -2,7 +2,7 @@
"""Google (Video)
For detailed description of the *REST-full* API see: `Query Parameter
Definitions`_. Not all parameters can be appied.
Definitions`_. Not all parameters can be applied.
.. _admonition:: Content-Security-Policy (CSP)
@ -163,7 +163,7 @@ def response(resp):
# google *sections*
if extract_text(eval_xpath(result, g_section_with_header)):
logger.debug("ingoring <g-section-with-header>")
logger.debug("ignoring <g-section-with-header>")
continue
title = extract_text(eval_xpath_getindex(result, title_xpath, 0))

View File

@ -0,0 +1,93 @@
from urllib.parse import urlencode
import json
import re
from datetime import datetime
# config
categories = ['general', 'images', 'music', 'videos']
paging = True
time_range_support = True
# url
base_url = 'https://api.ipfs-search.com/v1/'
search_string = 'search?{query} first-seen:{time_range} metadata.Content-Type:({mime_type})&page={page} '
mime_types_map = {
'general': "*",
'images': 'image*',
'music': 'audio*',
'videos': 'video*'
}
time_range_map = {'day': '[ now-24h\/h TO *]',
'week': '[ now\/h-7d TO *]',
'month': '[ now\/d-30d TO *]',
'year': '[ now\/d-1y TO *]'}
ipfs_url = 'https://gateway.ipfs.io/ipfs/{hash}'
def request(query, params):
mime_type = mime_types_map.get(params['category'], '*')
time_range = time_range_map.get(params['time_range'], '*')
search_path = search_string.format(
query=urlencode({'q': query}),
time_range=time_range,
page=params['pageno'],
mime_type=mime_type)
params['url'] = base_url + search_path
return params
def clean_html(text):
if not text:
return ""
return str(re.sub(re.compile('<.*?>'), '', text))
def create_base_result(record):
url = ipfs_url.format(hash=record.get('hash'))
title = clean_html(record.get('title'))
published_date = datetime.strptime(record.get('first-seen'), '%Y-%m-%dT%H:%M:%SZ')
return {'url': url,
'title': title,
'publishedDate': published_date}
def create_text_result(record):
result = create_base_result(record)
description = clean_html(record.get('description'))
result['description'] = description
return result
def create_image_result(record):
result = create_base_result(record)
result['img_src'] = result['url']
result['template'] = 'images.html'
return result
def create_video_result(record):
result = create_base_result(record)
result['thumbnail'] = ''
result['template'] = 'videos.html'
return result
def response(resp):
api_results = json.loads(resp.text)
results = []
for result in api_results.get('hits', []):
mime_type = result.get('mimetype', 'text/plain')
if mime_type.startswith('image'):
results.append(create_image_result(result))
elif mime_type.startswith('video'):
results.append(create_video_result(result))
else:
results.append(create_text_result(result))
return results
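A rough trace of what the request() function above produces, with a made-up query and hypothetical params (only the fields the function reads are shown):

# hypothetical input; searx normally fills params before calling request()
params = {'category': 'images', 'time_range': 'week', 'pageno': 1}
request('public domain photos', params)
# params['url'] now starts with https://api.ipfs-search.com/v1/search?q=public+domain+photos
# followed by the first-seen filter for 'week' and metadata.Content-Type:(image*)&page=1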

searx/engines/omnom.py (Normal file, +55 lines)
View File

@ -0,0 +1,55 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
"""
Omnom (General)
"""
from json import loads
from urllib.parse import urlencode
# about
about = {
"website": 'https://github.com/asciimoo/omnom',
"wikidata_id": None,
"official_api_documentation": 'http://your.omnom.host/api',
"use_official_api": True,
"require_api_key": False,
"results": 'JSON',
}
# engine dependent config
categories = ['general']
paging = True
# search-url
base_url = None
search_path = 'bookmarks?{query}&pageno={pageno}&format=json'
bookmark_path = 'bookmark?id='
# do search-request
def request(query, params):
params['url'] = base_url +\
search_path.format(query=urlencode({'query': query}),
pageno=params['pageno'])
return params
# get response from search-request
def response(resp):
results = []
json = loads(resp.text)
# parse results
for r in json.get('Bookmarks', {}):
content = r['url']
if r.get('notes'):
content += ' - ' + r['notes']
results.append({
'title': r['title'],
'content': content,
'url': base_url + bookmark_path + str(r['id']),
})
# return results
return results

View File

@ -72,7 +72,7 @@ def response(resp):
elif properties.get('osm_type') == 'R':
osm_type = 'relation'
else:
# continue if invalide osm-type
# continue if invalid osm-type
continue
url = result_base_url.format(osm_type=osm_type,

View File

@ -71,7 +71,7 @@ def response(resp):
if 'downloadUrl' in result:
new_result['torrentfile'] = result['downloadUrl']
# magnet link *may* be in guid, but it may be also idential to infoUrl
# magnet link *may* be in guid, but it may be also identical to infoUrl
if 'guid' in result and isinstance(result['guid'], str) and result['guid'].startswith('magnet'):
new_result['magnetlink'] = result['guid']

View File

@ -23,6 +23,7 @@ categories = ['music']
paging = True
api_client_id = None
api_client_secret = None
timeout = 10.0
# search-url
url = 'https://api.spotify.com/'
@ -40,9 +41,10 @@ def request(query, params):
r = requests.post(
'https://accounts.spotify.com/api/token',
timeout=timeout,
data={'grant_type': 'client_credentials'},
headers={'Authorization': 'Basic ' + base64.b64encode(
"{}:{}".format(api_client_id, api_client_secret).encode()
"{}:{}".format(api_client_id, api_client_secret).encode(),
).decode()}
)
j = loads(r.text)

View File

@ -51,7 +51,7 @@ search_url = base_url + 'sp/search?'
# specific xpath variables
# ads xpath //div[@id="results"]/div[@id="sponsored"]//div[@class="result"]
# not ads: div[@class="result"] are the direct childs of div[@id="results"]
# not ads: div[@class="result"] are the direct children of div[@id="results"]
results_xpath = '//div[@class="w-gl__result__main"]'
link_xpath = './/a[@class="w-gl__result-title result-link"]'
content_xpath = './/p[@class="w-gl__description"]'
@ -91,15 +91,13 @@ def get_sc_code(headers):
dom = html.fromstring(resp.text)
try:
# href --> '/?sc=adrKJMgF8xwp20'
href = eval_xpath(dom, '//a[@class="footer-home__logo"]')[0].get('href')
sc_code = eval_xpath(dom, '//input[@name="sc"]')[0].get('value')
except IndexError as exc:
# suspend startpage API --> https://github.com/searxng/searxng/pull/695
raise SearxEngineResponseException(
suspended_time=7 * 24 * 3600, message="PR-695: query new sc time-stamp failed!"
) from exc
sc_code = href[5:]
sc_code_ts = time()
logger.debug("new value is: %s", sc_code)
@ -216,7 +214,7 @@ def _fetch_supported_languages(resp):
# native name, the English name of the writing script used by the language,
# or occasionally something else entirely.
# this cases are so special they need to be hardcoded, a couple of them are mispellings
# this cases are so special they need to be hardcoded, a couple of them are misspellings
language_names = {
'english_uk': 'en-GB',
'fantizhengwen': ['zh-TW', 'zh-HK'],

View File

@ -49,7 +49,7 @@ WIKIDATA_PROPERTIES = {
# SERVICE wikibase:label: https://en.wikibooks.org/wiki/SPARQL/SERVICE_-_Label#Manual_Label_SERVICE
# https://en.wikibooks.org/wiki/SPARQL/WIKIDATA_Precision,_Units_and_Coordinates
# https://www.mediawiki.org/wiki/Wikibase/Indexing/RDF_Dump_Format#Data_model
# optmization:
# optimization:
# * https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/query_optimization
# * https://github.com/blazegraph/database/wiki/QueryHints
QUERY_TEMPLATE = """
@ -335,7 +335,7 @@ def get_attributes(language):
add_amount('P2046') # area
add_amount('P281') # postal code
add_label('P38') # currency
add_amount('P2048') # heigth (building)
add_amount('P2048') # height (building)
# Media
for p in ['P400', # platform (videogames, computing)

View File

@ -50,7 +50,7 @@ def request(query, params):
# replace private user area characters to make text legible
def replace_pua_chars(text):
pua_chars = {'\uf522': '\u2192', # rigth arrow
pua_chars = {'\uf522': '\u2192', # right arrow
'\uf7b1': '\u2115', # set of natural numbers
'\uf7b4': '\u211a', # set of rational numbers
'\uf7b5': '\u211d', # set of real numbers

View File

@ -35,7 +35,7 @@ time_range_support = False
time_range_url = '&hours={time_range_val}'
'''Time range URL parameter in the in :py:obj:`search_url`. If no time range is
requested by the user, the URL paramter is an empty string. The
requested by the user, the URL parameter is an empty string. The
``{time_range_val}`` replacement is taken from the :py:obj:`time_range_map`.
.. code:: yaml

View File

@ -30,7 +30,7 @@ def get_external_url(url_id, item_id, alternative="default"):
"""Return an external URL or None if url_id is not found.
url_id can take value from data/external_urls.json
The "imdb_id" value is automaticaly converted according to the item_id value.
The "imdb_id" value is automatically converted according to the item_id value.
If item_id is None, the raw URL with the $1 is returned.
"""

View File

@ -78,7 +78,7 @@ def load_single_https_ruleset(rules_path):
rules = []
exclusions = []
# parse childs from ruleset
# parse children from ruleset
for ruleset in root:
# this child define a target
if ruleset.tag == 'target':

View File

@ -2435,7 +2435,7 @@
<rule from="^http://widgets\.yahoo\.com/[^?]*"
to="https://www.yahoo.com/" />
<rule from="^http://((?:\w\w|fr-ca\.actualites|address|\w\w\.address|admanager|(?:\w\w|global)\.adserver|adspecs|\w+\.adspecs|\w+\.adspecs-new|advertising|\w\w\.advertising|beap\.adx|c5a?\.ah|(?:s-)?cookex\.amp|(?:[aosz]|apac|y3?)\.analytics|anc|answers|(?:\w\w|espanol|malaysia)\.answers|antispam|\w\w\.antispam|vn\.antoan|au\.apps|global\.ard|astrology|\w\w\.astrology|hk\.(?:(?:info|f1\.master|f1\.page|search|store|edit\.store|user)\.)?auctions|autos|\w\w\.autos|ar\.ayuda|(?:clicks\.beap|csc\.beap|pn1|row|us)\.bc|tw\.bid|tw\.(?:campaign|master|mb|page|search|store|user)\.bid|(?:m\.)?tw\.bigdeals|tw\.billing|biz|boss|(?:tw\.partner|tw)\.buy|(?:\w\w\.)?calendar|careers|\w\w\.cars|(?:\w\w|es-us)\.celebridades|(?:\w\w\.)?celebrity|tw\.charity|i?chart|(?:\w\w|es-us)\.cine|\w\w\.cinema|(?:\w\w|es-us)\.clima|migration\.cn|(?:deveopers\.)?commercecentral|br\.contribuidores|(?:uk\.)?contributor|au\.dating|(?:\w\w|es-us)\.deportes|developer|tw\.dictionary|dir|downloads|s-b\.dp|(?:eu\.|na\.|sa\.|tw\.)?edit|tw\.(?:ysm\.)?emarketing|en-maktoob|\w\w\.entertainment|espanol|edit\.europe|eurosport|(?:de|es|it|uk)\.eurosport|everything|\w\w\.everything|\w+\.fantasysports|au\.fango|tw\.fashion|br\.financas|finance|(?:\w\w|tw\.chart|espanol|tw\.futures|streamerapi)\.finance|(?:\w\w|es-us)\.finanzas|nz\.rss\.food|nz\.forums|games|(?:au|ca|uk)\.games|geo|gma|groups|(?:\w\w|asia|espanol|es-us|fr-ca|moderators)\.groups|health|help|(?:\w\w|secure)\.help|homes|(?:tw|tw\.v2)\.house|info|\w\w\.info|tw\.tool\.ks|au\.launch|legalredirect|(?:\w\w)\.lifestyle|(?:gh\.bouncer\.)?login|us\.l?rd|local|\w\w\.local|m|r\.m|\w\w\.m|mail|(?:\w\w\.overview|[\w-]+(?:\.c\.yom)?)\.mail|maktoob|malaysia|tw\.(?:user\.)?mall|maps|(?:\w\w|espanol|sgws2)\.maps|messenger|(?:\w\w|malaysia)\.messenger|\w\w\.meteo|mlogin|mobile|(?:\w\w|espanol|malaysia)\.mobile|tw\.(?:campaign\.)?money|tw\.movie|movies|(?:au|ca|nz|au\.rss|nz\.rss|tw|uk)\.movies|[\w.-]+\.msg|(?:\w\w|es-us)\.mujer|music|ca\.music|[\w-]+\.musica|my|us\.my|de\.nachrichten|ucs\.netsvs|news|(?:au|ca|fr|gr|hk|in|nz|ph|nz\.rss|sg|tw|uk)\.news|cookiex\.ngd|(?:\w\w|es-us)\.noticias|omg|(?:\w\w|es-us)\.omg|au\.oztips|rtb\.pclick|pilotx1|pipes|play|playerio|privacy|profile|tw\.promo|(?:au|hk|nz)\.promotions|publishing|(?:analytics|mailapps|media|ucs|us-locdrop|video)\.query|hk\.rd|(?:\w\w\.|fr-ca\.)?safely|screen|(?:\w\w|es-us)\.screen|scribe|search|(?:\w\w|w\w\.blog|\w\w\.dictionary|finance|\w\w\.finance|images|\w\w\.images|\w\w\.knowledge|\w\w\.lifestyle|\w\w\.local|malaysia|movies|\w\w\.movies|news|\w\w\.news|malaysia\.news|r|recipes|\w\w\.recipes|shine|shopping|\w\w\.shopping|sports|\w\w\.sports|tools|au\.tv|video|\w\w\.video|malaysia\.video)\.search|sec|rtb\.pclick\.secure|security|tw\.security|\w\w\.seguranca|\w\w\.seguridad|es-us\.seguridad|\w\w\.seguro|tw\.serviceplus|settings|shine|ca\.shine|shopping|ca\.shopping|\w+\.sitios|dashboard\.slingstone|(?:au\.|order\.)?smallbusiness|smarttv|rd\.software|de\.spiele|sports|(?:au|ca|fr|hk|nz|ph|profiles|au\.rss|nz\.rss|tw)\.sports|tw\.stock|au\.thehype|\w\w\.tiempo|es\.todo|toolbar|(?:\w\w|data|malaysia)\.toolbar|(?:au|nz)\.totaltravel|transparency|travel|tw\.travel||tv|(?:ar|au|de|fr|es|es-us|it|mx|nz|au\.rss|uk)\.tv|tw\.uwant|(?:mh|nz|qos|yep)\.video|weather|(?:au|ca|hk|in|nz|sg|ph|uk|us)\.weather|de\.wetter|www|au\.yel|video\.media\.yql|dmros\.ysm)\.)?yahoo\.com/"
<rule from="^http://((?:\w\w|fr-ca\.actualites|address|\w\w\.address|admanager|(?:\w\w|global)\.adserver|adspecs|\w+\.adspecs|\w+\.adspecs-new|advertising|\w\w\.advertising|beap\.adx|c5a?\.ah|(?:s-)?cookex\.amp|(?:[aosz]|apac|y3?)\.analytics|anc|answers|(?:\w\w|espanol|malaysia)\.answers|antispam|\w\w\.antispam|vn\.antoan|au\.apps|global\.ard|astrology|\w\w\.astrology|hk\.(?:(?:info|f1\.master|f1\.page|search|store|edit\.store|user)\.)?auctions|autos|\w\w\.autos|ar\.ayuda|(?:clicks\.beap|csc\.beap|pn1|row|us)\.bc|tw\.bid|tw\.(?:campaign|master|mb|page|search|store|user)\.bid|(?:m\.)?tw\.bigdeals|tw\.billing|biz|boss|(?:tw\.partner|tw)\.buy|(?:\w\w\.)?calendar|careers|\w\w\.cars|(?:\w\w|es-us)\.celebridades|(?:\w\w\.)?celebrity|tw\.charity|i?chart|(?:\w\w|es-us)\.cine|\w\w\.cinema|(?:\w\w|es-us)\.clima|migration\.cn|(?:developers\.)?commercecentral|br\.contribuidores|(?:uk\.)?contributor|au\.dating|(?:\w\w|es-us)\.deportes|developer|tw\.dictionary|dir|downloads|s-b\.dp|(?:eu\.|na\.|sa\.|tw\.)?edit|tw\.(?:ysm\.)?emarketing|en-maktoob|\w\w\.entertainment|espanol|edit\.europe|eurosport|(?:de|es|it|uk)\.eurosport|everything|\w\w\.everything|\w+\.fantasysports|au\.fango|tw\.fashion|br\.financas|finance|(?:\w\w|tw\.chart|espanol|tw\.futures|streamerapi)\.finance|(?:\w\w|es-us)\.finanzas|nz\.rss\.food|nz\.forums|games|(?:au|ca|uk)\.games|geo|gma|groups|(?:\w\w|asia|espanol|es-us|fr-ca|moderators)\.groups|health|help|(?:\w\w|secure)\.help|homes|(?:tw|tw\.v2)\.house|info|\w\w\.info|tw\.tool\.ks|au\.launch|legalredirect|(?:\w\w)\.lifestyle|(?:gh\.bouncer\.)?login|us\.l?rd|local|\w\w\.local|m|r\.m|\w\w\.m|mail|(?:\w\w\.overview|[\w-]+(?:\.c\.yom)?)\.mail|maktoob|malaysia|tw\.(?:user\.)?mall|maps|(?:\w\w|espanol|sgws2)\.maps|messenger|(?:\w\w|malaysia)\.messenger|\w\w\.meteo|mlogin|mobile|(?:\w\w|espanol|malaysia)\.mobile|tw\.(?:campaign\.)?money|tw\.movie|movies|(?:au|ca|nz|au\.rss|nz\.rss|tw|uk)\.movies|[\w.-]+\.msg|(?:\w\w|es-us)\.mujer|music|ca\.music|[\w-]+\.musica|my|us\.my|de\.nachrichten|ucs\.netsvs|news|(?:au|ca|fr|gr|hk|in|nz|ph|nz\.rss|sg|tw|uk)\.news|cookiex\.ngd|(?:\w\w|es-us)\.noticias|omg|(?:\w\w|es-us)\.omg|au\.oztips|rtb\.pclick|pilotx1|pipes|play|playerio|privacy|profile|tw\.promo|(?:au|hk|nz)\.promotions|publishing|(?:analytics|mailapps|media|ucs|us-locdrop|video)\.query|hk\.rd|(?:\w\w\.|fr-ca\.)?safely|screen|(?:\w\w|es-us)\.screen|scribe|search|(?:\w\w|w\w\.blog|\w\w\.dictionary|finance|\w\w\.finance|images|\w\w\.images|\w\w\.knowledge|\w\w\.lifestyle|\w\w\.local|malaysia|movies|\w\w\.movies|news|\w\w\.news|malaysia\.news|r|recipes|\w\w\.recipes|shine|shopping|\w\w\.shopping|sports|\w\w\.sports|tools|au\.tv|video|\w\w\.video|malaysia\.video)\.search|sec|rtb\.pclick\.secure|security|tw\.security|\w\w\.seguranca|\w\w\.seguridad|es-us\.seguridad|\w\w\.seguro|tw\.serviceplus|settings|shine|ca\.shine|shopping|ca\.shopping|\w+\.sitios|dashboard\.slingstone|(?:au\.|order\.)?smallbusiness|smarttv|rd\.software|de\.spiele|sports|(?:au|ca|fr|hk|nz|ph|profiles|au\.rss|nz\.rss|tw)\.sports|tw\.stock|au\.thehype|\w\w\.tiempo|es\.todo|toolbar|(?:\w\w|data|malaysia)\.toolbar|(?:au|nz)\.totaltravel|transparency|travel|tw\.travel||tv|(?:ar|au|de|fr|es|es-us|it|mx|nz|au\.rss|uk)\.tv|tw\.uwant|(?:mh|nz|qos|yep)\.video|weather|(?:au|ca|hk|in|nz|sg|ph|uk|us)\.weather|de\.wetter|www|au\.yel|video\.media\.yql|dmros\.ysm)\.)?yahoo\.com/"
to="https://$1yahoo.com/" />
<rule from="^http://([\w-]+)\.yahoofs\.com/"

View File

@ -11,7 +11,11 @@ default_on = False
def on_result(request, search, result):
q = search.search_query.query
qs = shlex.split(q)
# WARN: shlex.quote is designed only for Unix shells and may be vulnerable
# to command injection on non-POSIX compliant shells (Windows)
# https://docs.python.org/3/library/shlex.html#shlex.quote
squote = shlex.quote(q)
qs = shlex.split(squote)
spitems = [x.lower() for x in qs if ' ' in x]
mitems = [x.lower() for x in qs if x.startswith('-')]
siteitems = [x.lower() for x in qs if x.startswith('site:')]
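For reference, a small standard-library illustration of the two calls used above (the query string is made up); quoting the whole query first yields a single, safely parseable token instead of an exception on unbalanced quotes:

import shlex

q = 'site:example.org "unterminated'
# shlex.split(q) would raise ValueError: No closing quotation
squote = shlex.quote(q)      # wraps the whole query in single quotes
shlex.split(squote)          # -> ['site:example.org "unterminated'], one token, no exception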

View File

@ -97,7 +97,7 @@ class SessionSinglePool(requests.Session):
self.mount('http://', http_adapter)
def close(self):
"""Call super, but clear adapters since there are managed globaly"""
"""Call super, but clear adapters since there are managed globally"""
self.adapters.clear()
super().close()

View File

@ -62,7 +62,7 @@ class Setting:
return self.value
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP reponse obect
"""Save cookie ``name`` in the HTTP response object
If needed, its overwritten in the inheritance."""
resp.set_cookie(name, self.value, max_age=COOKIE_MAX_AGE)
@ -125,7 +125,7 @@ class MultipleChoiceSetting(EnumStringSetting):
self.value.append(choice)
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP reponse obect
"""Save cookie ``name`` in the HTTP response object
"""
resp.set_cookie(name, ','.join(self.value), max_age=COOKIE_MAX_AGE)
@ -160,7 +160,7 @@ class SetSetting(Setting):
self.values = set(elements) # pylint: disable=attribute-defined-outside-init
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP reponse obect
"""Save cookie ``name`` in the HTTP response object
"""
resp.set_cookie(name, ','.join(self.values), max_age=COOKIE_MAX_AGE)
@ -209,7 +209,7 @@ class MapSetting(Setting):
self.key = data # pylint: disable=attribute-defined-outside-init
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP reponse obect
"""Save cookie ``name`` in the HTTP response object
"""
if hasattr(self, 'key'):
resp.set_cookie(name, self.key, max_age=COOKIE_MAX_AGE)
@ -253,7 +253,7 @@ class SwitchableSetting(Setting):
self.enabled.add(choice['id'])
def save(self, resp): # pylint: disable=arguments-differ
"""Save cookie in the HTTP reponse obect
"""Save cookie in the HTTP response object
"""
resp.set_cookie('disabled_{0}'.format(self.value), ','.join(self.disabled), max_age=COOKIE_MAX_AGE)
resp.set_cookie('enabled_{0}'.format(self.value), ','.join(self.enabled), max_age=COOKIE_MAX_AGE)
@ -517,7 +517,7 @@ class Preferences:
return ret_val
def save(self, resp):
"""Save cookie in the HTTP reponse obect
"""Save cookie in the HTTP response object
"""
for user_setting_name, user_setting in self.key_value_settings.items():
if user_setting.locked:

View File

@ -197,10 +197,10 @@ class BangParser(QueryPartParser):
self.raw_text_query.enginerefs.append(EngineRef(value, 'none'))
return True
# check if prefix is equal with categorie name
# check if prefix is equal with category name
if value in categories:
# using all engines for that search, which
# are declared under that categorie name
# are declared under that category name
self.raw_text_query.enginerefs.extend(EngineRef(engine.name, value)
for engine in categories[value]
if (engine.name, value) not in self.raw_text_query.disabled_engines)
@ -216,7 +216,7 @@ class BangParser(QueryPartParser):
self._add_autocomplete(first_char + suggestion)
return
# check if query starts with categorie name
# check if query starts with category name
for category in categories:
if category.startswith(value):
self._add_autocomplete(first_char + category)
@ -309,7 +309,7 @@ class RawTextQuery:
def getFullQuery(self):
"""
get full querry including whitespaces
get full query including whitespaces
"""
return '{0} {1}'.format(' '.join(self.query_parts), self.getQuery()).strip()

View File

@ -143,9 +143,9 @@ def result_score(result, language):
if language in domain_parts:
weight *= 1.1
occurences = len(result['positions'])
occurrences = len(result['positions'])
return sum((occurences * weight) / position for position in result['positions'])
return sum((occurrences * weight) / position for position in result['positions'])
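As a quick worked example of the scoring formula above: a result found at positions 1 and 3 in a matching-language domain has occurrences = 2 and weight = 1.1, so it scores (2 * 1.1)/1 + (2 * 1.1)/3 ≈ 2.93; earlier positions contribute more.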
class ResultContainer:
@ -252,7 +252,7 @@ class ResultContainer:
result['engines'] = set([result['engine']])
# strip multiple spaces and cariage returns from content
# strip multiple spaces and carriage returns from content
if result.get('content'):
result['content'] = WHITESPACE_REGEX.sub(' ', result['content'])
@ -278,7 +278,7 @@ class ResultContainer:
return merged_result
else:
# it's an image
# it's a duplicate if the parsed_url, template and img_src are differents
# it's a duplicate if the parsed_url, template and img_src are different
if result.get('img_src', '') == merged_result.get('img_src', ''):
return merged_result
return None

View File

@ -60,7 +60,7 @@ def run(engine_name_list, verbose):
stderr.write(f'{BOLD_SEQ}Engine {name:30}{RESET_SEQ}Checking\n')
checker = searx.search.checker.Checker(processor)
checker.run()
if checker.test_results.succesfull:
if checker.test_results.successful:
stdout.write(f'{BOLD_SEQ}Engine {name:30}{RESET_SEQ}{GREEN}OK{RESET_SEQ}\n')
if verbose:
stdout.write(f' {"found languages":15}: {" ".join(sorted(list(checker.test_results.languages)))}\n')

View File

@ -59,7 +59,7 @@ def run():
logger.debug('Checking %s engine', name)
checker = Checker(processor)
checker.run()
if checker.test_results.succesfull:
if checker.test_results.successful:
result['engines'][name] = {'success': True}
else:
result['engines'][name] = {'success': False, 'errors': checker.test_results.errors}

View File

@ -146,7 +146,7 @@ class TestResults:
self.languages.add(language)
@property
def succesfull(self):
def successful(self):
return len(self.errors) == 0
def __iter__(self):
@ -291,7 +291,7 @@ class ResultContainerTests:
self._record_error('No result')
def one_title_contains(self, title: str):
"""Check one of the title contains `title` (case insensitive comparaison)"""
"""Check one of the title contains `title` (case insensitive comparison)"""
title = title.lower()
for result in self.result_container.get_ordered_results():
if title in result['title'].lower():

View File

@ -56,7 +56,7 @@ class OnlineProcessor(EngineProcessor):
def _send_http_request(self, params):
# create dictionary which contain all
# informations about the request
# information about the request
request_args = dict(
headers=params['headers'],
cookies=params['cookies'],

View File

@ -19,7 +19,7 @@ search:
default_lang : "" # Default search language - leave blank to detect from browser information or use codes from 'languages.py'
ban_time_on_fail : 5 # ban time in seconds after engine errors
max_ban_time_on_fail : 120 # max ban time in seconds after engine errors
prefer_configured_language: False # increase weight of results in confiugred language in ranking
prefer_configured_language: False # increase weight of results in configured language in ranking
server:
port : 8888
@ -73,7 +73,7 @@ ui:
outgoing: # communication with search engines
request_timeout : 2.0 # default timeout in seconds, can be override by engine
# max_request_timeout: 10.0 # the maximum timeout in seconds
useragent_suffix : "" # suffix of searx_useragent, could contain informations like an email address to the administrator
useragent_suffix : "" # suffix of searx_useragent, could contain information like an email address to the administrator
pool_connections : 100 # Number of different hosts
pool_maxsize : 10 # Number of simultaneous requests by host
# uncomment below section if you want to use a proxy
@ -740,6 +740,13 @@ engines:
shortcut: iv
timeout : 5.0
disabled : True
- name: ipfs search
engine: ipfs_search
shortcut: ipfs
paging: True
timeout: 5.0
disabled: True
- name: kickass
engine : kickass
@ -1284,7 +1291,7 @@ engines:
- name : wiby
engine : json_engine
paging : True
search_url : https://wiby.me/json/?q={query}&o={pageno}0
search_url : https://wiby.me/json/?q={query}&p={pageno}
url_query : URL
title_query : Title
content_query : Snippet
@ -1654,7 +1661,7 @@ engines:
paging: true
first_page_num: 0
search_url: https://search.brave.com/search?q={query}&offset={pageno}&spellcheck=1
url_xpath: //div[@class="snippet fdb"]/a/@href
url_xpath: //a[@class="result-header"]/@href
title_xpath: //span[@class="snippet-title"]
content_xpath: //p[1][@class="snippet-description"]
suggestion_xpath: //div[@class="text-gray h6"]/a
@ -1677,6 +1684,15 @@ engines:
require_api_key: false
results: HTML
# omnom engine - see https://github.com/asciimoo/omnom for more details
# - name : omnom
# engine : omnom
# paging : True
# base_url : 'http://your.omnom.host/'
# enable_http : True
# categories : general
# shortcut : om
# Doku engine lets you access to any Doku wiki instance:
# A public one or a private/corporate one.
# - name : ubuntuwiki

View File

@ -37,7 +37,7 @@ def get_user_settings_path():
# find location of settings.yml
if 'SEARX_SETTINGS_PATH' in environ:
# if possible set path to settings using the
# enviroment variable SEARX_SETTINGS_PATH
# environment variable SEARX_SETTINGS_PATH
return check_settings_yml(environ['SEARX_SETTINGS_PATH'])
# if not, get it from /etc/searx, or last resort the codebase
@ -132,7 +132,7 @@ def load_settings(load_user_setttings=True):
default_settings = load_yaml(default_settings_path)
update_settings(default_settings, user_settings)
return (default_settings,
'merge the default settings ( {} ) and the user setttings ( {} )'
'merge the default settings ( {} ) and the user settings ( {} )'
.format(default_settings_path, user_settings_path))
# the user settings, fully replace the default configuration

View File

@ -96,7 +96,7 @@
{% if 'method' not in locked_preferences %}
{% set method_label = _('Method') %}
{% set method_info = _('Change how forms are submited, <a href="http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods" rel="external">learn more about request methods</a>') %}
{% set method_info = _('Change how forms are submitted, <a href="http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods" rel="external">learn more about request methods</a>') %}
{{ preferences_item_header(method_info, method_label, rtl, 'method') }}
<select class="form-control {{ custom_select_class(rtl) }}" name="method" id="method">
<option value="POST" {% if method == 'POST' %}selected="selected"{% endif %}>POST</option>

View File

@ -114,6 +114,6 @@ if __name__ == '__main__':
run_robot_tests([getattr(robot, x) for x in dir(robot) if x.startswith('test_')])
except Exception: # pylint: disable=broad-except
errors = True
print('Error occured: {0}'.format(traceback.format_exc()))
print('Error occurred: {0}'.format(traceback.format_exc()))
test_layer.tearDown()
sys.exit(1 if errors else 0)

View File

@ -10,6 +10,7 @@
# Gabriel Nunes <gabriel.hkr@gmail.com>, 2017
# Guimarães Mello <matheus.mello@disroot.org>, 2017
# Neton Brício <fervelinux@gmail.com>, 2015
# Noémi Ványi <sitbackandwait@gmail.com>, 2022
# pizzaiolo, 2016
# shizuka, 2018
msgid ""
@ -18,7 +19,7 @@ msgstr ""
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2020-07-09 15:07+0200\n"
"PO-Revision-Date: 2014-01-30 14:32+0000\n"
"Last-Translator: André Marcelo Alvarenga <alvarenga@kde.org>, 2022\n"
"Last-Translator: Noémi Ványi <sitbackandwait@gmail.com>, 2022\n"
"Language-Team: Portuguese (Brazil) (http://www.transifex.com/asciimoo/searx/language/pt_BR/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
@ -81,7 +82,7 @@ msgstr "erro de busca"
#: searx/webapp.py:634
msgid "{minutes} minute(s) ago"
msgstr "{minutos} minuto(s) atrás"
msgstr "{minutes} minuto(s) atrás"
#: searx/webapp.py:636
msgid "{hours} hour(s), {minutes} minute(s) ago"

View File

@ -53,7 +53,7 @@ def parse_lang(preferences: Preferences, form: Dict[str, str], raw_text_query: R
return preferences.get_value('language')
# get language
# set specific language if set on request, query or preferences
# TODO support search with multible languages
# TODO support search with multiple languages
if len(raw_text_query.languages):
query_lang = raw_text_query.languages[-1]
elif 'language' in form:
@ -216,7 +216,7 @@ def get_search_query_from_webapp(preferences: Preferences, form: Dict[str, str])
disabled_engines = preferences.engines.get_disabled()
# parse query, if tags are set, which change
# the serch engine or search-language
# the search engine or search-language
raw_text_query = RawTextQuery(form['q'], disabled_engines)
# set query
@ -231,7 +231,7 @@ def get_search_query_from_webapp(preferences: Preferences, form: Dict[str, str])
if not is_locked('categories') and raw_text_query.enginerefs and raw_text_query.specific:
# if engines are calculated from query,
# set categories by using that informations
# set categories by using that information
query_engineref_list = raw_text_query.enginerefs
else:
# otherwise, using defined categories to

View File

@ -225,7 +225,7 @@ def code_highlighter(codelines, language=None):
language = 'text'
try:
# find lexer by programing language
# find lexer by programming language
lexer = get_lexer_by_name(language, stripall=True)
except:
# if lexer is not found, using default one
@ -647,7 +647,7 @@ def search():
# removing html content and whitespace duplications
result['title'] = ' '.join(html_to_text(result['title']).strip().split())
if 'url' in result:
if 'url' in result and 'pretty_url' not in result:
result['pretty_url'] = prettify_url(result['url'])
# TODO, check if timezone is calculated right

View File

@ -35,7 +35,7 @@ class UnicodeWriter:
# Fetch UTF-8 output from the queue ...
data = self.queue.getvalue()
data = data.strip('\x00')
# ... and reencode it into the target encoding
# ... and re-encode it into the target encoding
data = self.encoder.encode(data)
# write to the target stream
self.stream.write(data.decode())

View File

@ -13,7 +13,7 @@ from searx.engines.wikidata import send_wikidata_query
# ORDER BY (with all the query fields) is important to keep a deterministic result order
# so multiple invokation of this script doesn't change currencies.json
# so multiple invocation of this script doesn't change currencies.json
SARQL_REQUEST = """
SELECT DISTINCT ?iso4217 ?unit ?unicode ?label ?alias WHERE {
?item wdt:P498 ?iso4217; rdfs:label ?label.
@ -29,7 +29,7 @@ ORDER BY ?iso4217 ?unit ?unicode ?label ?alias
"""
# ORDER BY (with all the query fields) is important to keep a deterministic result order
# so multiple invokation of this script doesn't change currencies.json
# so multiple invocation of this script doesn't change currencies.json
SPARQL_WIKIPEDIA_NAMES_REQUEST = """
SELECT DISTINCT ?iso4217 ?article_name WHERE {
?item wdt:P498 ?iso4217 .

View File

@ -30,7 +30,7 @@ HTTP_COLON = 'http:'
def get_bang_url():
response = requests.get(URL_BV1)
response = requests.get(URL_BV1, timeout=10.0)
response.raise_for_status()
r = RE_BANG_VERSION.findall(response.text)
@ -38,7 +38,7 @@ def get_bang_url():
def fetch_ddg_bangs(url):
response = requests.get(url)
response = requests.get(url, timeout=10.0)
response.raise_for_status()
return json.loads(response.content.decode())

View File

@ -5,7 +5,7 @@ import requests
import re
from os.path import dirname, join
from urllib.parse import urlparse, urljoin
from distutils.version import LooseVersion, StrictVersion
from packaging.version import Version, parse
from lxml import html
from searx import searx_dir
@ -39,7 +39,7 @@ def fetch_firefox_versions():
if path.startswith(RELEASE_PATH):
version = path[len(RELEASE_PATH):-1]
if NORMAL_REGEX.match(version):
versions.append(LooseVersion(version))
versions.append(Version(version))
list.sort(versions, reverse=True)
return versions
@ -49,12 +49,12 @@ def fetch_firefox_last_versions():
versions = fetch_firefox_versions()
result = []
major_last = versions[0].version[0]
major_last = versions[0].major
major_list = (major_last, major_last - 1)
for version in versions:
major_current = version.version[0]
major_current = version.major
if major_current in major_list:
result.append(version.vstring)
result.append(str(version))
return result

View File

@ -18,7 +18,7 @@ engines_languages_file = Path(searx_dir) / 'data' / 'engines_languages.json'
languages_file = Path(searx_dir) / 'languages.py'
# Fetchs supported languages for each engine and writes json file with those.
# Fetches supported languages for each engine and writes json file with those.
def fetch_supported_languages():
engines_languages = dict()

View File

@ -451,17 +451,17 @@ install_template() {
fi
if [[ -f "${dst}" ]] && cmp --silent "${template_file}" "${dst}" ; then
info_msg "file ${dst} allready installed"
info_msg "file ${dst} already installed"
return 0
fi
info_msg "diffrent file ${dst} allready exists on this host"
info_msg "different file ${dst} already exists on this host"
while true; do
choose_one _reply "choose next step with file $dst" \
"replace file" \
"leave file unchanged" \
"interactiv shell" \
"interactive shell" \
"diff files"
case $_reply in
@ -474,7 +474,7 @@ install_template() {
"leave file unchanged")
break
;;
"interactiv shell")
"interactive shell")
echo -e "// edit ${_Red}${dst}${_creset} to your needs"
echo -e "// exit with [${_BCyan}CTRL-D${_creset}]"
sudo -H -u "${owner}" -i
@ -1018,8 +1018,8 @@ nginx_install_app() {
nginx_include_apps_enabled() {
# Add the *NGINX_APPS_ENABLED* infrastruture to a nginx server block. Such
# infrastruture is already known from fedora and centos, including apps (location
# Add the *NGINX_APPS_ENABLED* infrastructure to a nginx server block. Such
# infrastructure is already known from fedora and centos, including apps (location
# directives) from the /etc/nginx/default.d folder into the *default* nginx
# server.
@ -1521,7 +1521,7 @@ _apt_pkg_info_is_updated=0
pkg_install() {
# usage: TITEL='install foobar' pkg_install foopkg barpkg
# usage: TITLE='install foobar' pkg_install foopkg barpkg
rst_title "${TITLE:-installation of packages}" section
echo -e "\npackage(s)::\n"
@ -1557,7 +1557,7 @@ pkg_install() {
pkg_remove() {
# usage: TITEL='remove foobar' pkg_remove foopkg barpkg
# usage: TITLE='remove foobar' pkg_remove foopkg barpkg
rst_title "${TITLE:-remove packages}" section
echo -e "\npackage(s)::\n"
@ -1623,7 +1623,7 @@ git_clone() {
# git_clone <url> <path> [<branch> [<user>]]
#
# First form uses $CACHE/<name> as destination folder, second form clones
# into <path>. If repository is allready cloned, pull from <branch> and
# into <path>. If repository is already cloned, pull from <branch> and
# update working tree (if needed, the caller has to stash local changes).
#
# git clone https://github.com/searx/searx searx-src origin/master searxlogin
@ -1696,7 +1696,7 @@ lxc_init_container_env() {
# usage: lxc_init_container_env <name>
# Create a /.lxcenv file in the root folder. Call this once after the
# container is inital started and before installing any boilerplate stuff.
# container is initial started and before installing any boilerplate stuff.
info_msg "create /.lxcenv in container $1"
cat <<EOF | lxc exec "${1}" -- bash | prefix_stdout "[${_BBlue}${1}${_creset}] "

View File

@ -107,7 +107,7 @@ show
:suite: show services of all (or <name>) containers from the LXC suite
:images: show information of local images
cmd
use single qoutes to evaluate in container's bash, e.g.: 'echo \$(hostname)'
use single quotes to evaluate in container's bash, e.g.: 'echo \$(hostname)'
-- run command '...' in all containers of the LXC suite
:<name>: run command '...' in container <name>
install
@ -178,7 +178,7 @@ main() {
lxc_delete_container "$2"
fi
;;
*) usage "uknown or missing container <name> $2"; exit 42;;
*) usage "unknown or missing container <name> $2"; exit 42;;
esac
;;
start|stop)
@ -190,7 +190,7 @@ main() {
info_msg "lxc $1 $2"
lxc "$1" "$2" | prefix_stdout "[${_BBlue}${i}${_creset}] "
;;
*) usage "uknown or missing container <name> $2"; exit 42;;
*) usage "unknown or missing container <name> $2"; exit 42;;
esac
;;
show)

View File

@ -402,7 +402,7 @@ EOF
}
enable_debug() {
warn_msg "Do not enable debug in production enviroments!!"
warn_msg "Do not enable debug in production environments!!"
info_msg "Enabling debug option needs to reinstall systemd service!"
set_service_env_debug true
}

View File

@ -436,7 +436,7 @@ install_settings() {
choose_one action "What should happen to the settings file? " \
"keep configuration unchanged" \
"use origin settings" \
"start interactiv shell"
"start interactive shell"
case $action in
"keep configuration unchanged")
info_msg "leave settings file unchanged"
@ -446,7 +446,7 @@ install_settings() {
info_msg "install origin settings"
cp "${SEARX_SETTINGS_TEMPLATE}" "${SEARX_SETTINGS_PATH}"
;;
"start interactiv shell")
"start interactive shell")
backup_file "${SEARX_SETTINGS_PATH}"
echo -e "// exit with [${_BCyan}CTRL-D${_creset}]"
sudo -H -i
@ -533,7 +533,7 @@ EOF
}
test_local_searx() {
rst_title "Testing searx instance localy" section
rst_title "Testing searx instance locally" section
echo
if service_is_available "http://${SEARX_INTERNAL_HTTP}" &>/dev/null; then
@ -600,7 +600,7 @@ EOF
}
enable_debug() {
warn_msg "Do not enable debug in production enviroments!!"
warn_msg "Do not enable debug in production environments!!"
info_msg "try to enable debug mode ..."
tee_stderr 0.1 <<EOF | sudo -H -i 2>&1 | prefix_stdout "$_service_prefix"
cd ${SEARX_SRC}
@ -833,8 +833,8 @@ rst-doc() {
eval "echo \"$(< "${REPO_ROOT}/docs/build-templates/searx.rst")\""
# I use ubuntu-20.04 here to demonstrate that versions are also suported,
# normaly debian-* and ubuntu-* are most the same.
# I use ubuntu-20.04 here to demonstrate that versions are also supported,
# normally debian-* and ubuntu-* are most the same.
for DIST_NAME in ubuntu-20.04 arch fedora; do
(

View File

@ -27,7 +27,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpretor config
# Plugin to use and interpreter config
single-interpreter = true
# enable master process

View File

@ -27,7 +27,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpretor config
# Plugin to use and interpreter config
single-interpreter = true
# enable master process

View File

@ -26,7 +26,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpretor config
# Plugin to use and interpreter config
single-interpreter = true
# enable master process

View File

@ -26,7 +26,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpretor config
# Plugin to use and interpreter config
single-interpreter = true
# enable master process