Compare commits


No commits in common. "master" and "v1.1.0" have entirely different histories.

86 changed files with 29148 additions and 20155 deletions


@ -424,7 +424,7 @@ Special thanks to `NLNet <https://nlnet.nl>`__ for sponsoring multiple features
- Removed engines: faroo
Special thanks to `NLNet <https://nlnet.nl>`__ for sponsoring multiple features of this release.
Special thanks to https://www.accessibility.nl/english for making accessibility audit.
News
~~~~


@ -16,7 +16,7 @@
## Author's checklist
<!-- additional notes for reviewers -->
## Related issues


@ -1,7 +1,5 @@
.. SPDX-License-Identifier: AGPL-3.0-or-later
Searx is no longer maintained. Thank you for your support and all your contributions.
.. figure:: https://raw.githubusercontent.com/searx/searx/master/searx/static/themes/oscar/img/logo_searx_a.png
:target: https://searx.github.io/searx/
:alt: searX
@ -75,21 +73,28 @@ Frequently asked questions
Is searx in maintenance mode?
#############################
No, searx is no longer maintained.
No, searx is accepting new features, including new engines. We are also adding
engine fixes or other bug fixes when needed. Also, keep in mind that searx is
maintained by volunteers who work in their free time. So some changes might take
some time to be merged.
We reject features that might violate the privacy of users. If you really want
such a feature, it must be disabled by default and warn users about the consequences
of turning it off.
What is the difference between searx and SearxNG?
#################################################
TL;DR: SearXNG is for users that want more features and bugs getting fixed quicker.
If you prefer a minimalist software and stable experience, use searx.
TL;DR: If you want to run a public instance, go with SearxNG. If you want to
self host your own instance, choose searx.
SearxNG is a fork of searx, created by a former maintainer of searx. The fork
was created because the majority of the maintainers at the time did not find
the new proposed features privacy respecting enough. The most significant issue is with
engine metrics.
Searx is built for privacy conscious users. It comes with a unique set of
challenges. One of the problems we face is that users would rather not report bugs,
because they do not want to publicly share what engines they use or what search
query triggered a problem. It is a challenge we accepted.
@ -119,8 +124,8 @@ instances locally, instead of using public instances.
Why should I use SearxNG?
#########################
SearxNG has rolling releases, dependencies updated more frequently, and engines are fixed
faster. It is easy to set up your own public instance, and monitor its
performance and metrics. It is simple to maintain as an instance administrator.
As a user, it provides a prettier user interface and nicer experience.


@ -100,7 +100,7 @@ update_conf() {
# There is a new version
if [ $FORCE_CONF_UPDATE -ne 0 ]; then
# Replace the current configuration
printf '⚠️ Automatically update %s to the new version\n' "${CONF}"
if [ ! -f "${OLD_CONF}" ]; then
printf 'The previous configuration is saved to %s\n' "${OLD_CONF}"
mv "${CONF}" "${OLD_CONF}"


@ -9,7 +9,7 @@ workers = 4
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpreter config
single-interpreter = true
master = true
plugin = python3


@ -1,129 +0,0 @@
=====================================
Run shell commands from your instance
=====================================
Command line engines are custom engines that run commands in the shell of the
host. In this article you can learn how to create a command engine and how to
customize the result display.
The command
===========
When specifying commands, you must make sure the commands are available on the
searx host. Searx will not install anything for you. Also, make sure that the
``searx`` user on your host is allowed to run the selected command and has
access to the required files.
Access control
==============
Be careful when creating command engines if you are running a public
instance. Do not expose any sensitive information. You can restrict access by
configuring a list of access tokens under tokens in your ``settings.yml``.
Available settings
==================
* ``command``: A comma separated list of the elements of the command. A special
token ``{{QUERY}}`` tells searx where to put the search terms of the
user. Example: ``['ls', '-l', '-h', '{{QUERY}}']``
* ``query_type``: The expected type of user search terms. Possible values:
``path`` and ``enum``. ``path`` checks if the user-provided path is inside the
working directory. If not, the query is not executed. ``enum`` is a list of
allowed search terms. If the user submits something which is not included in
the list, the query returns an error.
* ``delimiter``: A dict containing a delimiter char and the "titles" of each
element in keys.
* ``parse_regex``: A dict containing the regular expressions for each result
key.
* ``query_enum``: A list containing allowed search terms if ``query_type`` is
set to ``enum``.
* ``working_dir``: The directory where the command has to be executed. Default:
``.``
* ``result_separator``: The character that separates results. Default: ``\n``
Customize the result template
=============================
There is a default result template for displaying key-value pairs coming from
command engines. If you want something more tailored to your result types, you
can design your own template.
Searx relies on `Jinja2 <https://jinja.palletsprojects.com/>`_ for
templating. If you are familiar with Jinja, you will not have any issues
creating templates. You can access the result attributes with ``{{
result.attribute_name }}``.
In the example below the result has two attributes: ``header`` and ``content``.
To customize their display, you need the following template (you must define
these classes yourself):
.. code:: html
<div class="result">
<div class="result-header">
{{ result.header }}
</div>
<div class="result-content">
{{ result.content }}
</div>
</div>
Then put your template under ``searx/templates/{theme-name}/result_templates``
named ``your-template-name.html``. You can select your custom template with the
option ``result_template``.
.. code:: yaml
- name: your engine name
engine: command
result_template: your-template-name.html
Examples
========
Find files by name
------------------
The first example is to find files on your searx host. It uses the command
``find``, available on most Linux distributions. It expects a path type query. The
path in the search request must be inside the ``working_dir``.
The results are displayed with the default ``key-value.html`` template. A result
is displayed in a single row table with the key "line".
.. code:: yaml
- name : find
engine : command
command : ['find', '.', '-name', '{{QUERY}}']
query_type : path
shortcut : fnd
tokens : []
disabled : True
delimiter :
chars : ' '
keys : ['line']
Find files by contents
-----------------------
In the second example, we define an engine that searches in the contents of the
files under the ``working_dir``. The search type is not defined, so the user can
input any string they want. To restrict the input, you can set the ``query_type``
to ``enum`` and only allow a set of search terms to protect
yourself. Alternatively, make the engine private, so that no malevolent user
can access the engine.
.. code:: yaml
- name : regex search in files
engine : command
command : ['grep', '{{QUERY}}']
shortcut : gr
tokens : []
disabled : True
delimiter :
chars : ' '
keys : ['line']


@ -37,7 +37,7 @@ Disabled **D** Engine type **ET**
------------- ----------- -------------------- ------------
Safe search **SS**
------------- ----------- ---------------------------------
Weight **W**
------------- ----------- ---------------------------------
Disabled **D**
------------- ----------- ---------------------------------
@ -86,60 +86,3 @@ Show errors **DE**
{% endfor %}
.. flat-table:: Additional engines (commented out in settings.yml)
:header-rows: 1
:stub-columns: 2
* - Name
- Base URL
- Host
- Port
- Paging
* - elasticsearch
- localhost:9200
-
-
- False
* - meilisearch
- localhost:7700
-
-
- True
* - mongodb
-
- 127.0.0.1
- 27017
- True
* - mysql_server
-
- 127.0.0.1
- 3306
- True
* - postgresql
-
- 127.0.0.1
- 5432
- True
* - redis_server
-
- 127.0.0.1
- 6379
- False
* - solr
- localhost:8983
-
-
- True
* - sqlite
-
-
-
- True


@ -39,7 +39,7 @@ Example
Scenario:
#. Recoll indexes a local filesystem mounted in ``/export/documents/reference``,
#. the Recoll search interface can be reached at https://recoll.example.org/ and
#. the contents of this filesystem can be reached through https://download.example.org/reference
.. code:: yaml


@ -19,9 +19,5 @@ Administrator documentation
filtron
morty
engines
private-engines
command-engine
indexer-engines
no-sql-engines
plugins
buildhosts


@ -1,89 +0,0 @@
==================
Search in indexers
==================
Searx supports three popular indexer search engines:
* Elasticsearch
* Meilisearch
* Solr
Elasticsearch
=============
Make sure that the Elasticsearch user has access to the index you are querying.
If you are not using TLS during your connection, set ``enable_http`` to ``True``.
.. code:: yaml
- name : elasticsearch
shortcut : es
engine : elasticsearch
base_url : http://localhost:9200
username : elastic
password : changeme
index : my-index
query_type : match
enable_http : True
Available settings
------------------
* ``base_url``: URL of Elasticsearch instance. By default it is set to ``http://localhost:9200``.
* ``index``: Name of the index to query. Required.
* ``query_type``: Elasticsearch query method to use. Available: ``match``,
``simple_query_string``, ``term``, ``terms``, ``custom``.
* ``custom_query_json``: If you selected ``custom`` for ``query_type``, you must
provide the JSON payload in this option.
* ``username``: Username in Elasticsearch
* ``password``: Password for the Elasticsearch user
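If you select ``custom``, the JSON payload from ``custom_query_json`` is sent as the query. The sketch below is only an illustration (engine name, index and payload are assumptions):
.. code:: yaml
- name : elasticsearch-custom
  engine : elasticsearch
  shortcut : esc
  base_url : http://localhost:9200
  index : my-index
  query_type : custom
  custom_query_json : '{"query": {"match_all": {}}}'
  enable_http : True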
Meilisearch
===========
If you are not using TLS during connection, set ``enable_http`` to ``True``.
.. code:: yaml
- name : meilisearch
engine : meilisearch
shortcut: mes
base_url : http://localhost:7700
index : my-index
enable_http: True
Available settings
------------------
* ``base_url``: URL of the Meilisearch instance. By default it is set to http://localhost:7700
* ``index``: Name of the index to query. Required.
* ``auth_key``: Key required for authentication.
* ``facet_filters``: List of facets to search in.
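When the Meilisearch instance requires authentication, or you want to narrow the search to a facet, the options above combine as in this hypothetical sketch (engine name, key and facet values are placeholders):
.. code:: yaml
- name : meilisearch-private
  engine : meilisearch
  shortcut : mesp
  base_url : http://localhost:7700
  index : my-index
  auth_key : my-secret-key
  facet_filters : ['genre:comedy']
  enable_http : True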
Solr
====
If you are not using TLS during connection, set ``enable_http`` to ``True``.
.. code:: yaml
- name : solr
engine : solr
shortcut : slr
base_url : http://localhost:8983
collection : my-collection
sort : asc
enable_http : True
Available settings
------------------
* ``base_url``: URL of the Solr instance. By default it is set to http://localhost:8983
* ``collection``: Name of the collection to query. Required.
* ``sort``: Sorting of the results. Available: ``asc``, ``desc``.
* ``rows``: Maximum number of results from a query. Default value: 10.
* ``field_list``: List of fields returned from the query.
* ``default_fields``: Default fields to query.
* ``query_fields``: List of fields with a boost factor. The bigger the boost
factor of a field, the more important the field is in the query. Example:
``qf="field1^2.3 field2"``
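A Solr configuration using the tuning options above might look like the following sketch (engine name, collection, fields and boost factors are illustrative):
.. code:: yaml
- name : solr-tuned
  engine : solr
  shortcut : slt
  base_url : http://localhost:8983
  collection : my-collection
  rows : 20
  field_list : ['title', 'url', 'content']
  query_fields : 'title^2.3 content'
  enable_http : True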


@ -94,8 +94,8 @@ My experience is, that this command is a bit buggy.
.. _uwsgi configuration:
All together
============
Create the configuration ini-file according to your distribution (see below) and
restart the uwsgi application.


@ -1,170 +0,0 @@
===========================
Query SQL and NoSQL servers
===========================
SQL
===
SQL servers are traditional databases with predefined data schema. Furthermore,
modern versions also support BLOB data.
You can search in the following servers:
* `PostgreSQL`_
* `MySQL`_
* `SQLite`_
The configuration of the new database engines is similar. You must put a valid
SELECT SQL query in ``query_str``. At the moment you can bind at most
one parameter in your query.
Do not include LIMIT or OFFSET in your SQL query as the engines
rely on these keywords during paging.
PostgreSQL
----------
Required PyPI package: ``psycopg2``
You can find an example configuration below:
.. code:: yaml
- name : postgresql
engine : postgresql
database : my_database
username : searx
password : password
query_str : 'SELECT * from my_table WHERE my_column = %(query)s'
shortcut : psql
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running PostgreSQL. By default it is ``127.0.0.1``.
* ``port``: Port number PostgreSQL is listening on. By default it is ``5432``.
* ``database``: Name of the database you are connecting to.
* ``username``: Name of the user connecting to the database.
* ``password``: Password of the database user.
* ``query_str``: Query string to run. Keywords like ``LIMIT`` and ``OFFSET`` are not allowed. Required.
* ``limit``: Number of returned results per page. By default it is 10.
MySQL
-----
Required PyPI package: ``mysql-connector-python``
This is an example configuration for querying a MySQL server:
.. code:: yaml
- name : mysql
engine : mysql_server
database : my_database
username : searx
password : password
limit : 5
query_str : 'SELECT * from my_table WHERE my_column=%(query)s'
shortcut : mysql
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running MySQL. By default it is ``127.0.0.1``.
* ``port``: Port number MySQL is listening on. By default it is ``3306``.
* ``database``: Name of the database you are connecting to.
* ``auth_plugin``: Authentication plugin to use. By default it is ``caching_sha2_password``.
* ``username``: Name of the user connecting to the database.
* ``password``: Password of the database user.
* ``query_str``: Query string to run. Keywords like ``LIMIT`` and ``OFFSET`` are not allowed. Required.
* ``limit``: Number of returned results per page. By default it is 10.
SQLite
------
You can read from your database ``my_database`` using this example configuration:
.. code:: yaml
- name : sqlite
engine : sqlite
shortcut: sq
database : my_database
query_str : 'SELECT * FROM my_table WHERE my_column=:query'
Available options
~~~~~~~~~~~~~~~~~
* ``database``: Name of the database you are connecting to.
* ``query_str``: Query string to run. Keywords like ``LIMIT`` and ``OFFSET`` are not allowed. Required.
* ``limit``: Number of returned results per page. By default it is 10.
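Since SQLite needs no separate server, the configuration above is easy to try locally with Python's built-in ``sqlite3`` module. The sketch below uses an in-memory database and illustrative table and column names; the engine binds the user's search terms to the named parameter ``:query`` in the same way:

```python
import sqlite3

# In-memory database for the sketch; a real engine would point at a file
# such as "my_database" from the configuration above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE my_table (my_column TEXT, value TEXT)")
con.execute("INSERT INTO my_table VALUES ('hello', 'world')")

# searx substitutes the search terms for the :query parameter.
rows = con.execute(
    "SELECT * FROM my_table WHERE my_column = :query", {"query": "hello"}
).fetchall()
print(rows)  # [('hello', 'world')]
con.close()
```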
NoSQL
=====
NoSQL data stores are used for storing arbitrary data without first defining their
structure. To query the supported servers, you must install their drivers from PyPI.
You can search in the following servers:
* `Redis`_
* `MongoDB`_
Redis
-----
Required PyPI package: ``redis``
Example configuration:
.. code:: yaml
- name : mystore
engine : redis_server
exact_match_only : True
host : 127.0.0.1
port : 6379
password : secret-password
db : 0
shortcut : rds
enable_http : True
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running Redis. By default it is ``127.0.0.1``.
* ``port``: Port number Redis is listening on. By default it is ``6379``.
* ``password``: Password if required by Redis.
* ``db``: Number of the database you are connecting to.
* ``exact_match_only``: Enable if you need exact matching. By default it is ``True``.
MongoDB
-------
Required PyPI package: ``pymongo``
Below is an example configuration for using a MongoDB collection:
.. code:: yaml
- name : mymongo
engine : mongodb
shortcut : icm
host : '127.0.0.1'
port : 27017
database : personal
collection : income
key : month
enable_http: True
Available options
~~~~~~~~~~~~~~~~~
* ``host``: IP address of the host running MongoDB. By default it is ``127.0.0.1``.
* ``port``: Port number MongoDB is listening on. By default it is ``27017``.
* ``password``: Password if required by MongoDB.
* ``database``: Name of the database you are connecting to.
* ``collection``: Name of the collection you want to search in.
* ``exact_match_only``: Enable if you need exact matching. By default it is ``True``.

Binary file not shown.



@ -1,44 +0,0 @@
=============================
How to create private engines
=============================
If you are running a public searx instance, you might want to restrict access
to some engines. Maybe you are afraid that bots might abuse the engine, or the
engine might return private results you do not want to share with strangers.
Server side configuration
=========================
You can make any engine private by setting a list of tokens in your settings.yml
file. In the following example, we set two different tokens that provide access
to the engine.
.. code:: yaml
- name: my-private-google
engine: google
shortcut: pgo
tokens: ['my-secret-token-1', 'my-secret-token-2']
To access the private engine, you must distribute the tokens to your searx
users. It is up to you how you let them know the access tokens you created.
Client side configuration
=========================
As a searx instance user, you can add any number of access tokens on the
Preferences page. You have to set a comma separated list of strings in the
"Engine tokens" input, then save your new preferences.
.. image:: prefernces-private.png
:width: 600px
:align: center
:alt: location of token textarea
Once the Preferences page is loaded again, you can see the information of the
private engines you got access to. If you cannot see the expected engines in the
engines list, double check your token. If there is no issue with the token,
contact your instance administrator.


@ -129,7 +129,7 @@ Global Settings
outgoing: # communication with search engines
request_timeout : 2.0 # default timeout in seconds, can be overridden by engine
# max_request_timeout: 10.0 # the maximum timeout in seconds
useragent_suffix : "" # information like an email address to the administrator
pool_connections : 100 # Number of different hosts
pool_maxsize : 10 # Number of simultaneous requests by host
# uncomment below section if you want to use a proxy


@ -1,48 +0,0 @@
=================================
Private searx project is finished
=================================
We are officially finished with the Private searx project. The goal was to
extend searx capabilities beyond just searching on the Internet. We added
support for offline engines. These engines do not connect to the Internet,
they find results locally.
As some of the offline engines run commands on the searx host, we added an
option to protect any engine by making them private. Private engines can only be
accessed using a token.
After searx was prepared to run offline queries, we added numerous new engines:
1. Command line engine
2. MySQL
3. PostgreSQL
4. SQLite
5. Redis
6. MongoDB
We also added new engines that communicate over HTTP, but you might want to keep
them private:
1. Elasticsearch
2. Meilisearch
3. Solr
The last step was to document this work. We added new tutorials on creating
command engines, making engines private and also adding a custom result template
to your own engines.
Acknowledgement
===============
The project was sponsored by `Search and Discovery Fund`_ of `NLnet
Foundation`_. We would like to thank NLnet not only for the funds, but also for
the conversations and their ideas. They were truly invested and passionate about
supporting searx.
.. _Search and Discovery Fund: https://nlnet.nl/discovery
.. _NLnet Foundation: https://nlnet.nl/
| Happy hacking.
| kvch // 2022.09.30 23:15


@ -15,4 +15,3 @@ Blog
search-indexer-engines
sql-engines
search-database-engines
documentation-offline-engines


@ -207,7 +207,7 @@ debug services from filtron and morty analogous use:
Another point we have to notice is that each service (:ref:`searx <searx.sh>`,
:ref:`filtron <filtron.sh>` and :ref:`morty <morty.sh>`) runs under dedicated
system user account with the same name (compare :ref:`create searx user`). To
get a shell from these accounts, simply call one of the scripts:
.. tabs::
@ -311,7 +311,7 @@ of the container:
Now we can develop as usual in the working tree of our desktop system. Every
time the software was changed, you have to restart the searx service (in the
container):
.. tabs::
@ -370,7 +370,7 @@ We build up a fully functional searx suite in a archlinux container:
$ sudo -H ./utils/lxc.sh install suite searx-archlinux
To access HTTP from the desktop we installed nginx for the services inside the
container:
.. tabs::


@ -16,7 +16,7 @@ you can use your own template by placing the template under
``searx/templates/{theme_name}/result_templates/{template_name}`` and setting
``result_template`` attribute to ``{template_name}``.
Furthermore, if you do not want to expose these engines on a public instance, you can
still add them and limit the access by setting ``tokens`` as described in the `blog post about
private engines`_.
@ -29,7 +29,7 @@ structure.
Redis
-----
Required package: ``redis``
Redis is a key value based data store usually stored in memory.


@ -15,7 +15,7 @@ All of the engines above are added to ``settings.yml`` just commented out, as yo
Please note that if you are not using HTTPS to access these engines, you have to enable
HTTP requests by setting ``enable_http`` to ``True``.
Furthermore, if you do not want to expose these engines on a public instance, you can
still add them and limit the access by setting ``tokens`` as described in the `blog post about
private engines`_.
@ -57,7 +57,7 @@ small-scale (less than 10 million documents) data collections. E.g. it is great
web pages you have visited and searching in the contents later.
The engine supports faceted search, so you can search in a subset of documents of the collection.
Furthermore, you can search in Meilisearch instances that require authentication by setting ``auth_token``.
Here is a simple example to query a Meilisearch instance:


@ -62,7 +62,7 @@ Before enabling MySQL engine, you must install the package ``mysql-connector-pyt
The authentication plugin is configurable by setting ``auth_plugin`` in the attributes.
By default it is set to ``caching_sha2_password``.
This is an example configuration for querying a MySQL server:
.. code:: yaml


@ -10,7 +10,7 @@ from searx.version import VERSION_STRING
# Project --------------------------------------------------------------
project = u'searx'
copyright = u'2015-2022, Adam Tauber, Noémi Ványi'
copyright = u'2015-2021, Adam Tauber, Noémi Ványi'
author = u'Adam Tauber'
release, version = VERSION_STRING, VERSION_STRING
highlight_language = 'none'
@ -101,11 +101,13 @@ imgmath_font_size = 14
html_theme_options = {"index_sidebar_logo": True}
html_context = {"project_links": [] }
html_context["project_links"].append(ProjectLink("Blog", brand.DOCS_URL + "/blog/index.html"))
html_context["project_links"].append(ProjectLink("Blog", "blog/index.html"))
if brand.GIT_URL:
html_context["project_links"].append(ProjectLink("Source", brand.GIT_URL))
if brand.WIKI_URL:
html_context["project_links"].append(ProjectLink("Wiki", brand.WIKI_URL))
if brand.PUBLIC_INSTANCES:
html_context["project_links"].append(ProjectLink("Public instances", brand.PUBLIC_INSTANCES))
if brand.TWITTER_URL:
html_context["project_links"].append(ProjectLink("Twitter", brand.TWITTER_URL))
if brand.ISSUE_URL:


@ -41,7 +41,7 @@ engine file
argument type information
======================= =========== ========================================================
categories list pages, in which the engine is working
paging boolean support multiple pages
time_range_support boolean support search time range
engine_type str ``online`` by default, other possibles values are
``offline``, ``online_dictionary``, ``online_currency``
@ -159,7 +159,7 @@ parsed arguments
----------------
The function ``def request(query, params):`` always returns the ``params``
variable. Inside searx, the following parameters can be used to specify a search
request:
=================== =========== ==========================================================================


@ -15,7 +15,7 @@ generated and deployed at :docs:`github.io <.>`. For build prerequisites read
:ref:`docs build`.
The source files of Searx's documentation are located at :origin:`docs`. Sphinx
assumes source files to be encoded in UTF-8 by default. Run :ref:`make docs.live
<make docs.live>` to build HTML while editing.
.. sidebar:: Further reading
@ -227,13 +227,13 @@ To refer anchors use the `ref role`_ markup:
.. code:: reST
Visit chapter :ref:`reST anchor`. Or set hyperlink text manually :ref:`foo
bar <reST anchor>`.
.. admonition:: ``:ref:`` role
:class: rst-example
Visit chapter :ref:`reST anchor`. Or set hyperlink text manually :ref:`foo
bar <reST anchor>`.
.. _reST ordinary ref:
@ -494,8 +494,8 @@ Figures & Images
is flexible. To get best results in the generated output format, install
ImageMagick_ and Graphviz_.
Searx's sphinx setup includes: :ref:`linuxdoc:kfigure`. Scalable here means;
scalable in sense of the build process. Normally in absence of a converter
tool, the build process will break. From the author's POV it is annoying to care
about the build process when handling images, especially since he has no
access to the build process. With :ref:`linuxdoc:kfigure` the build process
@ -503,7 +503,7 @@ continues and scales output quality in dependence of installed image processors.
If you want to add an image, you should use the ``kernel-figure`` (inheritance
of :dudir:`figure`) and ``kernel-image`` (inheritance of :dudir:`image`)
directives. E.g. to insert a figure with a scalable image format use SVG
(:ref:`svg image example`):
.. code:: reST
@ -1185,7 +1185,7 @@ and *targets* (e.g. a ref to :ref:`row 2 of table's body <row body 2>`).
- cell 4.4
* - row 5
- cell 5.1 with automatic span to right end
* - row 6
- cell 6.1
@ -1237,7 +1237,7 @@ and *targets* (e.g. a ref to :ref:`row 2 of table's body <row body 2>`).
- cell 4.4
* - row 5
- cell 5.1 with automatic span to right end
* - row 6
- cell 6.1


@ -8,6 +8,9 @@ Searx is a free internet metasearch engine which aggregates results from more
than 70 search services. Users are neither tracked nor profiled. Additionally,
searx can be used over Tor for online anonymity.
Get started with searx by using one of the Searx-instances_. If you don't trust
anyone, you can set up your own, see :ref:`installation`.
.. sidebar:: Features
- Self hosted
@ -30,3 +33,5 @@ searx can be used over Tor for online anonymity.
searx_extra/index
utils/index
blog/index
.. _Searx-instances: https://searx.space


@ -17,7 +17,7 @@ Prefix: ``:``
Prefix: ``?``
to add engines and categories to the currently selected categories
Abbreviations of the engines and languages are also accepted. Engine/category
modifiers are chainable and inclusive (e.g. with :search:`!it !ddg !wp qwer
<?q=%21it%20%21ddg%20%21wp%20qwer>` search in IT category **and** duckduckgo
**and** wikipedia for ``qwer``).

manage

@ -188,11 +188,13 @@ docker.build() {
die 1 "there is no remote origin"
fi
# This is a git repository
# "git describe" to get the Docker version (for example : v0.15.0-89-g0585788e)
# awk to remove the "v" and the "g"
SEARX_GIT_VERSION=$(git describe --tags | awk -F'-' '{OFS="-"; $1=substr($1, 2); if ($3) { $3=substr($3, 2); } print}')
SEARX_GIT_VERSION=$(git describe --match "v[0-9]*\.[0-9]*\.[0-9]*" HEAD 2>/dev/null | awk -F'-' '{OFS="-"; $1=substr($1, 2); if ($3) { $3=substr($3, 2); } print}')
# add the suffix "-dirty" if the repository has uncommitted change
# /!\ HACK for searx/searx: ignore utils/brand.env
git update-index -q --refresh
if [ ! -z "$(git diff-index --name-only HEAD -- | grep -v 'utils/brand.env')" ]; then
@ -284,6 +286,9 @@ node.env() {
which npm &> /dev/null || die 1 'node.env - npm is not found!'
( set -e
# shellcheck disable=SC2030
PATH="$(npm bin):$PATH"
export PATH
build_msg INSTALL "npm install $NPM_PACKAGES"
# shellcheck disable=SC2086
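The ``awk`` filter in ``docker.build()`` above only strips the leading ``v`` from the tag and the ``g`` prefix from the abbreviated commit hash; it can be checked in isolation (the version string is just an example):

```shell
# Turn "v0.15.0-89-g0585788e" into "0.15.0-89-0585788e"
echo "v0.15.0-89-g0585788e" \
  | awk -F'-' '{OFS="-"; $1=substr($1, 2); if ($3) { $3=substr($3, 2); } print}'
```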


@ -1,19 +1,19 @@
mock==5.0.1
mock==4.0.3
nose2[coverage_plugin]==0.12.0
cov-core==1.15.0
pycodestyle==2.10.0
pylint==2.15.9
splinter==0.19.0
pycodestyle==2.9.1
pylint==2.14.5
splinter==0.18.1
transifex-client==0.14.3; python_version < '3.10'
transifex-client==0.12.5; python_version == '3.10'
selenium==4.8.3
twine==4.0.2
Pallets-Sphinx-Themes==2.0.3
selenium==4.3.0
twine==4.0.1
Pallets-Sphinx-Themes==2.0.2
docutils==0.18
Sphinx==5.3.0
Sphinx==5.1.1
sphinx-issues==3.0.1
sphinx-jinja==2.0.2
sphinx-tabs==3.4.1
sphinxcontrib-programoutput==0.17
sphinx-autobuild==2021.3.14
linuxdoc==20221127
linuxdoc==20211220


@ -1,13 +1,13 @@
Brotli==1.0.9
babel==2.11.0
certifi==2022.12.7
babel==2.10.3
certifi==2022.6.15
flask-babel==2.0.0
flask==2.2.2
flask==2.2.1
jinja2==3.1.2
langdetect==1.0.9
lxml==4.9.2
lxml==4.9.1
pygments==2.12.0
python-dateutil==2.8.2
pyyaml==6.0
requests[socks]==2.28.2
setproctitle==1.3.2
requests[socks]==2.28.1
setproctitle==1.3.1


@ -41,11 +41,11 @@ if settings['ui']['static_path']:
'''
enable debug if
the environment variable SEARX_DEBUG is 1 or true
(whatever the value in settings.yml)
or general.debug=True in settings.yml
disable debug if
the environment variable SEARX_DEBUG is 0 or false
(whatever the value in settings.yml)
or general.debug=False in settings.yml
'''

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,9 +1,8 @@
{
"versions": [
"111.0.1",
"111.0",
"110.0.1",
"110.0"
"102.0",
"101.0.1",
"101.0"
],
"os": [
"Windows NT 10.0; WOW64",


@ -153,6 +153,7 @@
"Q107164998": "cd mm²/m²",
"Q107210119": "g/s",
"Q107210344": "mg/s",
"Q107213614": "kJ/100g",
"Q107226391": "cm⁻¹",
"Q1072404": "K",
"Q107244316": "mm⁻¹",
@ -207,38 +208,16 @@
"Q1091257": "tex",
"Q1092296": "a",
"Q110143852": "Ω cm",
"Q110143896": "cm³/g",
"Q1104069": "$",
"Q11061003": "μm²",
"Q11061005": "nm²",
"Q110742003": "dppx",
"Q1131660": "st",
"Q1137675": "cr",
"Q114002440": "𒄀",
"Q114002534": "𒃻",
"Q114002639": "𒈨𒊑",
"Q114002796": "𒂆",
"Q114002930": "𒀺",
"Q114002955": "𒀹𒃷",
"Q114002974": "𒃷",
"Q1140444": "Zb",
"Q1140577": "Yb",
"Q114589269": "A",
"Q1152074": "Pb",
"Q1152323": "Tb",
"Q115277430": "QB",
"Q115280832": "RB",
"Q115359862": "qg",
"Q115359863": "rg",
"Q115359865": "Rg",
"Q115359866": "Qg",
"Q115359910": "Rm",
"Q115533751": "rm",
"Q115533764": "qm",
"Q115533776": "Qm",
"Q116432446": "ᵐ",
"Q116432563": "ˢ",
"Q116443090": "ʰ",
"Q1165799": "mil",
"Q11776930": "Mg",
"Q11830636": "psf",
@ -257,14 +236,12 @@
"Q12257695": "Eb/s",
"Q12257696": "EB/s",
"Q12261466": "kB/s",
"Q12263659": "mgal",
"Q12265780": "Pb/s",
"Q12265783": "PB/s",
"Q12269121": "Yb/s",
"Q12269122": "YB/s",
"Q12269308": "Zb/s",
"Q12269309": "ZB/s",
"Q1238720": "vols.",
"Q1247300": "cm H₂O",
"Q12714022": "cwt",
"Q12789864": "GeV",
@ -305,6 +282,7 @@
"Q14914907": "th",
"Q14916719": "Gpc",
"Q14923662": "Pm³",
"Q1511773": "LSd",
"Q15120301": "l atm",
"Q1542309": "xu",
"Q1545979": "ft³",
@ -326,6 +304,7 @@
"Q17255465": "v_P",
"Q173117": "R$",
"Q1741429": "kpm",
"Q174467": "Lm",
"Q174728": "cm",
"Q174789": "mm",
"Q175821": "μm",
@ -349,11 +328,13 @@
"Q182429": "m/s",
"Q1826195": "dl",
"Q18413919": "cm/s",
"Q184172": "F",
"Q185078": "a",
"Q185153": "erg",
"Q185648": "Torr",
"Q185759": "span",
"Q1872619": "zs",
"Q189097": "₧",
"Q190095": "Gy",
"Q19017495": "mm²",
"Q190951": "S$",
@ -369,7 +350,6 @@
"Q194339": "B$",
"Q1970718": "mam",
"Q1972579": "pdl",
"Q19877834": "cd-ft",
"Q199462": "LE",
"Q199471": "Afs",
"Q200323": "dm",
@ -408,7 +388,7 @@
"Q211256": "mi/h",
"Q21154419": "PD",
"Q211580": "BTU (th)",
"Q212120": "Ah",
"Q212120": "A h",
"Q213005": "G$",
"Q2140397": "in³",
"Q214377": "ell",
@ -448,6 +428,7 @@
"Q23931040": "dam²",
"Q23931103": "nmi²",
"Q240468": "syr£",
"Q2414435": "$b.",
"Q242988": "Lib$",
"Q2438073": "ag",
"Q2448803": "mV",
@ -565,7 +546,7 @@
"Q3773454": "Mpc",
"Q3815076": "Kib",
"Q3833309": "£",
"Q3858002": "mAh",
"Q3858002": "mA h",
"Q3867152": "ft/s²",
"Q389062": "Tib",
"Q3902688": "pl",
@ -626,8 +607,6 @@
"Q53393868": "GJ",
"Q53393886": "PJ",
"Q53393890": "EJ",
"Q53393893": "ZJ",
"Q53393898": "YJ",
"Q53448786": "yHz",
"Q53448790": "zHz",
"Q53448794": "fHz",
@ -641,7 +620,6 @@
"Q53448826": "hHz",
"Q53448828": "yJ",
"Q53448832": "zJ",
"Q53448835": "fJ",
"Q53448842": "pJ",
"Q53448844": "nJ",
"Q53448847": "μJ",
@ -704,7 +682,6 @@
"Q53951982": "Mt",
"Q53952048": "kt",
"Q54006645": "ZWb",
"Q54081354": "ZT",
"Q54081925": "ZSv",
"Q54082468": "ZS",
"Q54083144": "ZΩ",
@ -729,6 +706,8 @@
"Q56157046": "nmol",
"Q56157048": "pmol",
"Q56160603": "fmol",
"Q56302633": "UM",
"Q56317116": "mgal",
"Q56317622": "Q_P",
"Q56318907": "kbar",
"Q56349362": "Bs.S",
@ -1205,10 +1184,10 @@
"Q11570": "kg",
"Q11573": "m",
"Q11574": "s",
"Q11579": "K",
"Q11582": "L",
"Q12129": "pc",
"Q12438": "N",
"Q16068": "DM",
"Q1811": "AU",
"Q20764": "Ma",
"Q2101": "e",

View File

@ -142,7 +142,7 @@ def load_engine(engine_data):
engine.stats = {
'sent_search_count': 0, # sent search
'search_count': 0, # successful search
'search_count': 0, # succesful search
'result_count': 0,
'engine_time': 0,
'engine_time_count': 0,
@ -171,7 +171,7 @@ def load_engine(engine_data):
categories.setdefault(category_name, []).append(engine)
if engine.shortcut in engine_shortcuts:
logger.error('Engine config error: ambiguous shortcut: {0}'.format(engine.shortcut))
logger.error('Engine config error: ambigious shortcut: {0}'.format(engine.shortcut))
sys.exit(1)
engine_shortcuts[engine.shortcut] = engine.name

View File

@ -52,7 +52,8 @@ def request(query, params):
offset=offset)
params['url'] = base_url + search_path
params['headers']['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
params['headers']['User-Agent'] = ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36')
return params
@ -67,13 +68,11 @@ def response(resp):
for result in eval_xpath(dom, '//div[@class="sa_cc"]'):
link = eval_xpath(result, './/h3/a')[0]
url = link.attrib.get('href')
pretty_url = extract_text(eval_xpath(result, './/cite'))
title = extract_text(link)
content = extract_text(eval_xpath(result, './/p'))
# append result
results.append({'url': url,
'pretty_url': pretty_url,
'title': title,
'content': content})
@ -81,13 +80,11 @@ def response(resp):
for result in eval_xpath(dom, '//li[@class="b_algo"]'):
link = eval_xpath(result, './/h2/a')[0]
url = link.attrib.get('href')
pretty_url = extract_text(eval_xpath(result, './/cite'))
title = extract_text(link)
content = extract_text(eval_xpath(result, './/p'))
# append result
results.append({'url': url,
'pretty_url': pretty_url,
'title': title,
'content': content})

View File

@ -70,7 +70,7 @@ def request(query, params):
if params['time_range'] in time_range_dict:
params['url'] += time_range_string.format(interval=time_range_dict[params['time_range']])
# bing videos did not like "older" versions < 70.0.1 when selecting other
# bing videos did not like "older" versions < 70.0.1 when selectin other
# languages then 'en' .. very strange ?!?!
params['headers']['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64; rv:73.0.1) Gecko/20100101 Firefox/73.0.1'

View File

@ -18,7 +18,7 @@ from searx.poolrequests import get
# about
about = {
"website": 'https://lite.duckduckgo.com/lite/',
"website": 'https://lite.duckduckgo.com/lite',
"wikidata_id": 'Q12805',
"official_api_documentation": 'https://duckduckgo.com/api',
"use_official_api": False,
@ -45,7 +45,7 @@ language_aliases = {
time_range_dict = {'day': 'd', 'week': 'w', 'month': 'm', 'year': 'y'}
# search-url
url = 'https://lite.duckduckgo.com/lite/'
url = 'https://lite.duckduckgo.com/lite'
url_ping = 'https://duckduckgo.com/t/sl_l'
@ -73,9 +73,6 @@ def request(query, params):
# link again and again ..
params['headers']['Content-Type'] = 'application/x-www-form-urlencoded'
params['headers']['Origin'] = 'https://lite.duckduckgo.com'
params['headers']['Referer'] = 'https://lite.duckduckgo.com/'
params['headers']['User-Agent'] = 'Mozilla/5.0'
# initial page does not have an offset
if params['pageno'] == 2:

View File

@ -80,7 +80,7 @@ def response(resp):
# * book / performing art / film / television / media franchise / concert tour / playwright
# * prepared food
# * website / software / os / programming language / file format / software engineer
# * company
# * compagny
content = ''
heading = search_res.get('Heading', '')

View File

@ -40,7 +40,7 @@ def response(resp):
search_res = loads(resp.text)
# check if items are received
# check if items are recieved
if 'items' not in search_res:
return []

View File

@ -109,15 +109,22 @@ filter_mapping = {
# specific xpath variables
# ------------------------
results_xpath = '//div[contains(@class, "MjjYud")]'
title_xpath = './/h3[1]'
href_xpath = './/a/@href'
content_xpath = './/div[@data-sncf]'
# google results are grouped into <div class="jtfYYd ..." ../>
results_xpath = '//div[contains(@class, "jtfYYd")]'
results_xpath_mobile_ui = '//div[contains(@class, "g ")]'
# google *sections* are no usual *results*, we ignore them
g_section_with_header = './g-section-with-header'
# the title is a h3 tag relative to the result group
title_xpath = './/h3[1]'
# in the result group there is <div class="yuRUbf" ../> it's first child is a <a
# href=...>
href_xpath = './/div[@class="yuRUbf"]//a/@href'
# in the result group there is <div class="VwiC3b ..." ../> containing the *content*
content_xpath = './/div[contains(@class, "VwiC3b")]'
# Suggestions are links placed in a *card-section*, we extract only the text
# from the links not the links itself.
@ -206,8 +213,7 @@ def request(query, params):
additional_parameters = {}
if use_mobile_ui:
additional_parameters = {
'asearch': 'arc',
'async': 'use_ac:true,_fmt:html',
'async': 'use_ac:true,_fmt:pc',
}
# https://www.google.de/search?q=corona&hl=de&lr=lang_de&start=0&tbs=qdr%3Ad&safe=medium
@ -282,7 +288,7 @@ def response(resp):
# google *sections*
if extract_text(eval_xpath(result, g_section_with_header)):
logger.debug("ignoring <g-section-with-header>")
logger.debug("ingoring <g-section-with-header>")
continue
try:

View File

@ -2,7 +2,7 @@
"""Google (News)
For detailed description of the *REST-full* API see: `Query Parameter
Definitions`_. Not all parameters can be applied:
Definitions`_. Not all parameters can be appied:
- num_ : the number of search results is ignored
- save_ : is ignored / Google-News results are always *SafeSearch*
@ -155,7 +155,7 @@ def response(resp):
padding = (4 -(len(jslog) % 4)) * "="
jslog = b64decode(jslog + padding)
except binascii.Error:
# URL can't be read, skip this result
# URL cant be read, skip this result
continue
# now we have : b'[null, ... null,"https://www.cnn.com/.../index.html"]'

View File

@ -2,7 +2,7 @@
"""Google (Video)
For detailed description of the *REST-full* API see: `Query Parameter
Definitions`_. Not all parameters can be applied.
Definitions`_. Not all parameters can be appied.
.. _admonition:: Content-Security-Policy (CSP)
@ -163,7 +163,7 @@ def response(resp):
# google *sections*
if extract_text(eval_xpath(result, g_section_with_header)):
logger.debug("ignoring <g-section-with-header>")
logger.debug("ingoring <g-section-with-header>")
continue
title = extract_text(eval_xpath_getindex(result, title_xpath, 0))

View File

@ -1,93 +0,0 @@
from urllib.parse import urlencode
import json
import re
from datetime import datetime
# config
categories = ['general', 'images', 'music', 'videos']
paging = True
time_range_support = True
# url
base_url = 'https://api.ipfs-search.com/v1/'
search_string = 'search?{query} first-seen:{time_range} metadata.Content-Type:({mime_type})&page={page} '
mime_types_map = {
'general': "*",
'images': 'image*',
'music': 'audio*',
'videos': 'video*'
}
time_range_map = {'day': '[ now-24h\/h TO *]',
'week': '[ now\/h-7d TO *]',
'month': '[ now\/d-30d TO *]',
'year': '[ now\/d-1y TO *]'}
ipfs_url = 'https://gateway.ipfs.io/ipfs/{hash}'
def request(query, params):
mime_type = mime_types_map.get(params['category'], '*')
time_range = time_range_map.get(params['time_range'], '*')
search_path = search_string.format(
query=urlencode({'q': query}),
time_range=time_range,
page=params['pageno'],
mime_type=mime_type)
params['url'] = base_url + search_path
return params
def clean_html(text):
if not text:
return ""
return str(re.sub(re.compile('<.*?>'), '', text))
def create_base_result(record):
url = ipfs_url.format(hash=record.get('hash'))
title = clean_html(record.get('title'))
published_date = datetime.strptime(record.get('first-seen'), '%Y-%m-%dT%H:%M:%SZ')
return {'url': url,
'title': title,
'publishedDate': published_date}
def create_text_result(record):
result = create_base_result(record)
description = clean_html(record.get('description'))
result['description'] = description
return result
def create_image_result(record):
result = create_base_result(record)
result['img_src'] = result['url']
result['template'] = 'images.html'
return result
def create_video_result(record):
result = create_base_result(record)
result['thumbnail'] = ''
result['template'] = 'videos.html'
return result
def response(resp):
api_results = json.loads(resp.text)
results = []
for result in api_results.get('hits', []):
mime_type = result.get('mimetype', 'text/plain')
if mime_type.startswith('image'):
results.append(create_image_result(result))
elif mime_type.startswith('video'):
results.append(create_video_result(result))
else:
results.append(create_text_result(result))
return results
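The removed ipfs_search engine above dispatches each hit on its MIME-type prefix to pick a result template. A minimal standalone sketch of that dispatch (hypothetical records, no network calls):

```python
def classify(mimetype):
    # pick a result template from a MIME-type prefix,
    # mirroring the branching in the removed engine's response()
    if mimetype.startswith('image'):
        return 'images.html'
    if mimetype.startswith('video'):
        return 'videos.html'
    return 'default.html'

hits = [
    {'hash': 'Qm1', 'mimetype': 'image/png'},
    {'hash': 'Qm2', 'mimetype': 'text/plain'},
]
templates = [classify(h.get('mimetype', 'text/plain')) for h in hits]
```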

View File

@ -1,55 +0,0 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
"""
Omnom (General)
"""
from json import loads
from urllib.parse import urlencode
# about
about = {
"website": 'https://github.com/asciimoo/omnom',
"wikidata_id": None,
"official_api_documentation": 'http://your.omnom.host/api',
"use_official_api": True,
"require_api_key": False,
"results": 'JSON',
}
# engine dependent config
categories = ['general']
paging = True
# search-url
base_url = None
search_path = 'bookmarks?{query}&pageno={pageno}&format=json'
bookmark_path = 'bookmark?id='
# do search-request
def request(query, params):
params['url'] = base_url +\
search_path.format(query=urlencode({'query': query}),
pageno=params['pageno'])
return params
# get response from search-request
def response(resp):
results = []
json = loads(resp.text)
# parse results
for r in json.get('Bookmarks', {}):
content = r['url']
if r.get('notes'):
content += ' - ' + r['notes']
results.append({
'title': r['title'],
'content': content,
'url': base_url + bookmark_path + str(r['id']),
})
# return results
return results

View File

@ -72,7 +72,7 @@ def response(resp):
elif properties.get('osm_type') == 'R':
osm_type = 'relation'
else:
# continue if invalid osm-type
# continue if invalide osm-type
continue
url = result_base_url.format(osm_type=osm_type,

View File

@ -71,7 +71,7 @@ def response(resp):
if 'downloadUrl' in result:
new_result['torrentfile'] = result['downloadUrl']
# magnet link *may* be in guid, but it may be also identical to infoUrl
# magnet link *may* be in guid, but it may be also idential to infoUrl
if 'guid' in result and isinstance(result['guid'], str) and result['guid'].startswith('magnet'):
new_result['magnetlink'] = result['guid']

View File

@ -23,7 +23,6 @@ categories = ['music']
paging = True
api_client_id = None
api_client_secret = None
timeout = 10.0
# search-url
url = 'https://api.spotify.com/'
@ -41,10 +40,9 @@ def request(query, params):
r = requests.post(
'https://accounts.spotify.com/api/token',
timeout=timeout,
data={'grant_type': 'client_credentials'},
headers={'Authorization': 'Basic ' + base64.b64encode(
"{}:{}".format(api_client_id, api_client_secret).encode(),
"{}:{}".format(api_client_id, api_client_secret).encode()
).decode()}
)
j = loads(r.text)

View File

@ -51,7 +51,7 @@ search_url = base_url + 'sp/search?'
# specific xpath variables
# ads xpath //div[@id="results"]/div[@id="sponsored"]//div[@class="result"]
# not ads: div[@class="result"] are the direct children of div[@id="results"]
# not ads: div[@class="result"] are the direct childs of div[@id="results"]
results_xpath = '//div[@class="w-gl__result__main"]'
link_xpath = './/a[@class="w-gl__result-title result-link"]'
content_xpath = './/p[@class="w-gl__description"]'
@ -91,13 +91,15 @@ def get_sc_code(headers):
dom = html.fromstring(resp.text)
try:
sc_code = eval_xpath(dom, '//input[@name="sc"]')[0].get('value')
# href --> '/?sc=adrKJMgF8xwp20'
href = eval_xpath(dom, '//a[@class="footer-home__logo"]')[0].get('href')
except IndexError as exc:
# suspend startpage API --> https://github.com/searxng/searxng/pull/695
raise SearxEngineResponseException(
suspended_time=7 * 24 * 3600, message="PR-695: query new sc time-stamp failed!"
) from exc
sc_code = href[5:]
sc_code_ts = time()
logger.debug("new value is: %s", sc_code)
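The Startpage hunk above recovers the `sc` token from the footer logo's `href` with a fixed slice (`href[5:]`). The example value below is hypothetical (same shape as the comment in the hunk); parsing the query string explicitly gives the same token without depending on the prefix length:

```python
from urllib.parse import urlparse, parse_qs

href = '/?sc=adrKJMgF8xwp20'  # hypothetical value, shape as in the hunk's comment

# the fixed slice used in the diff: skip the leading '/?sc='
sc_slice = href[5:]

# equivalent, but robust to a changed prefix: parse the query string
sc_parsed = parse_qs(urlparse(href).query)['sc'][0]
```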
@ -214,7 +216,7 @@ def _fetch_supported_languages(resp):
# native name, the English name of the writing script used by the language,
# or occasionally something else entirely.
# this cases are so special they need to be hardcoded, a couple of them are misspellings
# this cases are so special they need to be hardcoded, a couple of them are mispellings
language_names = {
'english_uk': 'en-GB',
'fantizhengwen': ['zh-TW', 'zh-HK'],

View File

@ -49,7 +49,7 @@ WIKIDATA_PROPERTIES = {
# SERVICE wikibase:label: https://en.wikibooks.org/wiki/SPARQL/SERVICE_-_Label#Manual_Label_SERVICE
# https://en.wikibooks.org/wiki/SPARQL/WIKIDATA_Precision,_Units_and_Coordinates
# https://www.mediawiki.org/wiki/Wikibase/Indexing/RDF_Dump_Format#Data_model
# optimization:
# optmization:
# * https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/query_optimization
# * https://github.com/blazegraph/database/wiki/QueryHints
QUERY_TEMPLATE = """
@ -335,7 +335,7 @@ def get_attributes(language):
add_amount('P2046') # area
add_amount('P281') # postal code
add_label('P38') # currency
add_amount('P2048') # height (building)
add_amount('P2048') # heigth (building)
# Media
for p in ['P400', # platform (videogames, computing)

View File

@ -50,7 +50,7 @@ def request(query, params):
# replace private user area characters to make text legible
def replace_pua_chars(text):
pua_chars = {'\uf522': '\u2192', # right arrow
pua_chars = {'\uf522': '\u2192', # rigth arrow
'\uf7b1': '\u2115', # set of natural numbers
'\uf7b4': '\u211a', # set of rational numbers
'\uf7b5': '\u211d', # set of real numbers

View File

@ -35,7 +35,7 @@ time_range_support = False
time_range_url = '&hours={time_range_val}'
'''Time range URL parameter in the in :py:obj:`search_url`. If no time range is
requested by the user, the URL parameter is an empty string. The
requested by the user, the URL paramter is an empty string. The
``{time_range_val}`` replacement is taken from the :py:obj:`time_range_map`.
.. code:: yaml

View File

@ -30,7 +30,7 @@ def get_external_url(url_id, item_id, alternative="default"):
"""Return an external URL or None if url_id is not found.
url_id can take value from data/external_urls.json
The "imdb_id" value is automatically converted according to the item_id value.
The "imdb_id" value is automaticaly converted according to the item_id value.
If item_id is None, the raw URL with the $1 is returned.
"""

View File

@ -78,7 +78,7 @@ def load_single_https_ruleset(rules_path):
rules = []
exclusions = []
# parse children from ruleset
# parse childs from ruleset
for ruleset in root:
# this child define a target
if ruleset.tag == 'target':

View File

@ -2435,7 +2435,7 @@
<rule from="^http://widgets\.yahoo\.com/[^?]*"
to="https://www.yahoo.com/" />
<rule from="^http://((?:\w\w|fr-ca\.actualites|address|\w\w\.address|admanager|(?:\w\w|global)\.adserver|adspecs|\w+\.adspecs|\w+\.adspecs-new|advertising|\w\w\.advertising|beap\.adx|c5a?\.ah|(?:s-)?cookex\.amp|(?:[aosz]|apac|y3?)\.analytics|anc|answers|(?:\w\w|espanol|malaysia)\.answers|antispam|\w\w\.antispam|vn\.antoan|au\.apps|global\.ard|astrology|\w\w\.astrology|hk\.(?:(?:info|f1\.master|f1\.page|search|store|edit\.store|user)\.)?auctions|autos|\w\w\.autos|ar\.ayuda|(?:clicks\.beap|csc\.beap|pn1|row|us)\.bc|tw\.bid|tw\.(?:campaign|master|mb|page|search|store|user)\.bid|(?:m\.)?tw\.bigdeals|tw\.billing|biz|boss|(?:tw\.partner|tw)\.buy|(?:\w\w\.)?calendar|careers|\w\w\.cars|(?:\w\w|es-us)\.celebridades|(?:\w\w\.)?celebrity|tw\.charity|i?chart|(?:\w\w|es-us)\.cine|\w\w\.cinema|(?:\w\w|es-us)\.clima|migration\.cn|(?:developers\.)?commercecentral|br\.contribuidores|(?:uk\.)?contributor|au\.dating|(?:\w\w|es-us)\.deportes|developer|tw\.dictionary|dir|downloads|s-b\.dp|(?:eu\.|na\.|sa\.|tw\.)?edit|tw\.(?:ysm\.)?emarketing|en-maktoob|\w\w\.entertainment|espanol|edit\.europe|eurosport|(?:de|es|it|uk)\.eurosport|everything|\w\w\.everything|\w+\.fantasysports|au\.fango|tw\.fashion|br\.financas|finance|(?:\w\w|tw\.chart|espanol|tw\.futures|streamerapi)\.finance|(?:\w\w|es-us)\.finanzas|nz\.rss\.food|nz\.forums|games|(?:au|ca|uk)\.games|geo|gma|groups|(?:\w\w|asia|espanol|es-us|fr-ca|moderators)\.groups|health|help|(?:\w\w|secure)\.help|homes|(?:tw|tw\.v2)\.house|info|\w\w\.info|tw\.tool\.ks|au\.launch|legalredirect|(?:\w\w)\.lifestyle|(?:gh\.bouncer\.)?login|us\.l?rd|local|\w\w\.local|m|r\.m|\w\w\.m|mail|(?:\w\w\.overview|[\w-]+(?:\.c\.yom)?)\.mail|maktoob|malaysia|tw\.(?:user\.)?mall|maps|(?:\w\w|espanol|sgws2)\.maps|messenger|(?:\w\w|malaysia)\.messenger|\w\w\.meteo|mlogin|mobile|(?:\w\w|espanol|malaysia)\.mobile|tw\.(?:campaign\.)?money|tw\.movie|movies|(?:au|ca|nz|au\.rss|nz\.rss|tw|uk)\.movies|[\w.-]+\.msg|(?:\w\w|es-us)\.mujer|music|ca\.music|[\w-]+\.musica|my|us\.m
y|de\.nachrichten|ucs\.netsvs|news|(?:au|ca|fr|gr|hk|in|nz|ph|nz\.rss|sg|tw|uk)\.news|cookiex\.ngd|(?:\w\w|es-us)\.noticias|omg|(?:\w\w|es-us)\.omg|au\.oztips|rtb\.pclick|pilotx1|pipes|play|playerio|privacy|profile|tw\.promo|(?:au|hk|nz)\.promotions|publishing|(?:analytics|mailapps|media|ucs|us-locdrop|video)\.query|hk\.rd|(?:\w\w\.|fr-ca\.)?safely|screen|(?:\w\w|es-us)\.screen|scribe|search|(?:\w\w|w\w\.blog|\w\w\.dictionary|finance|\w\w\.finance|images|\w\w\.images|\w\w\.knowledge|\w\w\.lifestyle|\w\w\.local|malaysia|movies|\w\w\.movies|news|\w\w\.news|malaysia\.news|r|recipes|\w\w\.recipes|shine|shopping|\w\w\.shopping|sports|\w\w\.sports|tools|au\.tv|video|\w\w\.video|malaysia\.video)\.search|sec|rtb\.pclick\.secure|security|tw\.security|\w\w\.seguranca|\w\w\.seguridad|es-us\.seguridad|\w\w\.seguro|tw\.serviceplus|settings|shine|ca\.shine|shopping|ca\.shopping|\w+\.sitios|dashboard\.slingstone|(?:au\.|order\.)?smallbusiness|smarttv|rd\.software|de\.spiele|sports|(?:au|ca|fr|hk|nz|ph|profiles|au\.rss|nz\.rss|tw)\.sports|tw\.stock|au\.thehype|\w\w\.tiempo|es\.todo|toolbar|(?:\w\w|data|malaysia)\.toolbar|(?:au|nz)\.totaltravel|transparency|travel|tw\.travel||tv|(?:ar|au|de|fr|es|es-us|it|mx|nz|au\.rss|uk)\.tv|tw\.uwant|(?:mh|nz|qos|yep)\.video|weather|(?:au|ca|hk|in|nz|sg|ph|uk|us)\.weather|de\.wetter|www|au\.yel|video\.media\.yql|dmros\.ysm)\.)?yahoo\.com/"
<rule from="^http://((?:\w\w|fr-ca\.actualites|address|\w\w\.address|admanager|(?:\w\w|global)\.adserver|adspecs|\w+\.adspecs|\w+\.adspecs-new|advertising|\w\w\.advertising|beap\.adx|c5a?\.ah|(?:s-)?cookex\.amp|(?:[aosz]|apac|y3?)\.analytics|anc|answers|(?:\w\w|espanol|malaysia)\.answers|antispam|\w\w\.antispam|vn\.antoan|au\.apps|global\.ard|astrology|\w\w\.astrology|hk\.(?:(?:info|f1\.master|f1\.page|search|store|edit\.store|user)\.)?auctions|autos|\w\w\.autos|ar\.ayuda|(?:clicks\.beap|csc\.beap|pn1|row|us)\.bc|tw\.bid|tw\.(?:campaign|master|mb|page|search|store|user)\.bid|(?:m\.)?tw\.bigdeals|tw\.billing|biz|boss|(?:tw\.partner|tw)\.buy|(?:\w\w\.)?calendar|careers|\w\w\.cars|(?:\w\w|es-us)\.celebridades|(?:\w\w\.)?celebrity|tw\.charity|i?chart|(?:\w\w|es-us)\.cine|\w\w\.cinema|(?:\w\w|es-us)\.clima|migration\.cn|(?:deveopers\.)?commercecentral|br\.contribuidores|(?:uk\.)?contributor|au\.dating|(?:\w\w|es-us)\.deportes|developer|tw\.dictionary|dir|downloads|s-b\.dp|(?:eu\.|na\.|sa\.|tw\.)?edit|tw\.(?:ysm\.)?emarketing|en-maktoob|\w\w\.entertainment|espanol|edit\.europe|eurosport|(?:de|es|it|uk)\.eurosport|everything|\w\w\.everything|\w+\.fantasysports|au\.fango|tw\.fashion|br\.financas|finance|(?:\w\w|tw\.chart|espanol|tw\.futures|streamerapi)\.finance|(?:\w\w|es-us)\.finanzas|nz\.rss\.food|nz\.forums|games|(?:au|ca|uk)\.games|geo|gma|groups|(?:\w\w|asia|espanol|es-us|fr-ca|moderators)\.groups|health|help|(?:\w\w|secure)\.help|homes|(?:tw|tw\.v2)\.house|info|\w\w\.info|tw\.tool\.ks|au\.launch|legalredirect|(?:\w\w)\.lifestyle|(?:gh\.bouncer\.)?login|us\.l?rd|local|\w\w\.local|m|r\.m|\w\w\.m|mail|(?:\w\w\.overview|[\w-]+(?:\.c\.yom)?)\.mail|maktoob|malaysia|tw\.(?:user\.)?mall|maps|(?:\w\w|espanol|sgws2)\.maps|messenger|(?:\w\w|malaysia)\.messenger|\w\w\.meteo|mlogin|mobile|(?:\w\w|espanol|malaysia)\.mobile|tw\.(?:campaign\.)?money|tw\.movie|movies|(?:au|ca|nz|au\.rss|nz\.rss|tw|uk)\.movies|[\w.-]+\.msg|(?:\w\w|es-us)\.mujer|music|ca\.music|[\w-]+\.musica|my|us\.my
|de\.nachrichten|ucs\.netsvs|news|(?:au|ca|fr|gr|hk|in|nz|ph|nz\.rss|sg|tw|uk)\.news|cookiex\.ngd|(?:\w\w|es-us)\.noticias|omg|(?:\w\w|es-us)\.omg|au\.oztips|rtb\.pclick|pilotx1|pipes|play|playerio|privacy|profile|tw\.promo|(?:au|hk|nz)\.promotions|publishing|(?:analytics|mailapps|media|ucs|us-locdrop|video)\.query|hk\.rd|(?:\w\w\.|fr-ca\.)?safely|screen|(?:\w\w|es-us)\.screen|scribe|search|(?:\w\w|w\w\.blog|\w\w\.dictionary|finance|\w\w\.finance|images|\w\w\.images|\w\w\.knowledge|\w\w\.lifestyle|\w\w\.local|malaysia|movies|\w\w\.movies|news|\w\w\.news|malaysia\.news|r|recipes|\w\w\.recipes|shine|shopping|\w\w\.shopping|sports|\w\w\.sports|tools|au\.tv|video|\w\w\.video|malaysia\.video)\.search|sec|rtb\.pclick\.secure|security|tw\.security|\w\w\.seguranca|\w\w\.seguridad|es-us\.seguridad|\w\w\.seguro|tw\.serviceplus|settings|shine|ca\.shine|shopping|ca\.shopping|\w+\.sitios|dashboard\.slingstone|(?:au\.|order\.)?smallbusiness|smarttv|rd\.software|de\.spiele|sports|(?:au|ca|fr|hk|nz|ph|profiles|au\.rss|nz\.rss|tw)\.sports|tw\.stock|au\.thehype|\w\w\.tiempo|es\.todo|toolbar|(?:\w\w|data|malaysia)\.toolbar|(?:au|nz)\.totaltravel|transparency|travel|tw\.travel||tv|(?:ar|au|de|fr|es|es-us|it|mx|nz|au\.rss|uk)\.tv|tw\.uwant|(?:mh|nz|qos|yep)\.video|weather|(?:au|ca|hk|in|nz|sg|ph|uk|us)\.weather|de\.wetter|www|au\.yel|video\.media\.yql|dmros\.ysm)\.)?yahoo\.com/"
to="https://$1yahoo.com/" />
<rule from="^http://([\w-]+)\.yahoofs\.com/"

View File

@ -11,11 +11,7 @@ default_on = False
def on_result(request, search, result):
q = search.search_query.query
# WARN: shlex.quote is designed only for Unix shells and may be vulnerable
# to command injection on non-POSIX compliant shells (Windows)
# https://docs.python.org/3/library/shlex.html#shlex.quote
squote = shlex.quote(q)
qs = shlex.split(squote)
qs = shlex.split(q)
spitems = [x.lower() for x in qs if ' ' in x]
mitems = [x.lower() for x in qs if x.startswith('-')]
siteitems = [x.lower() for x in qs if x.startswith('site:')]
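The plugin hunk above removes the quote-then-split step. The difference matters because `shlex.split` raises `ValueError` on a query with an unbalanced quote, while quoting the whole query first always yields a parseable token. A minimal illustration:

```python
import shlex

q = 'site:example.org "unterminated'  # hypothetical user query

# splitting the raw query fails on the unbalanced double quote
try:
    shlex.split(q)
    raw_ok = True
except ValueError:
    raw_ok = False

# quoting the whole query first makes it a single inert token
qs = shlex.split(shlex.quote(q))
```

Note the shlex docs' caveat echoed in the removed comment: `shlex.quote` targets POSIX shells only.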

View File

@ -97,7 +97,7 @@ class SessionSinglePool(requests.Session):
self.mount('http://', http_adapter)
def close(self):
"""Call super, but clear adapters since there are managed globally"""
"""Call super, but clear adapters since there are managed globaly"""
self.adapters.clear()
super().close()

View File

@ -62,7 +62,7 @@ class Setting:
return self.value
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP response object
"""Save cookie ``name`` in the HTTP reponse obect
If needed, its overwritten in the inheritance."""
resp.set_cookie(name, self.value, max_age=COOKIE_MAX_AGE)
@ -125,7 +125,7 @@ class MultipleChoiceSetting(EnumStringSetting):
self.value.append(choice)
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP response object
"""Save cookie ``name`` in the HTTP reponse obect
"""
resp.set_cookie(name, ','.join(self.value), max_age=COOKIE_MAX_AGE)
@ -160,7 +160,7 @@ class SetSetting(Setting):
self.values = set(elements) # pylint: disable=attribute-defined-outside-init
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP response object
"""Save cookie ``name`` in the HTTP reponse obect
"""
resp.set_cookie(name, ','.join(self.values), max_age=COOKIE_MAX_AGE)
@ -209,7 +209,7 @@ class MapSetting(Setting):
self.key = data # pylint: disable=attribute-defined-outside-init
def save(self, name, resp):
"""Save cookie ``name`` in the HTTP response object
"""Save cookie ``name`` in the HTTP reponse obect
"""
if hasattr(self, 'key'):
resp.set_cookie(name, self.key, max_age=COOKIE_MAX_AGE)
@ -253,7 +253,7 @@ class SwitchableSetting(Setting):
self.enabled.add(choice['id'])
def save(self, resp): # pylint: disable=arguments-differ
"""Save cookie in the HTTP response object
"""Save cookie in the HTTP reponse obect
"""
resp.set_cookie('disabled_{0}'.format(self.value), ','.join(self.disabled), max_age=COOKIE_MAX_AGE)
resp.set_cookie('enabled_{0}'.format(self.value), ','.join(self.enabled), max_age=COOKIE_MAX_AGE)
@ -517,7 +517,7 @@ class Preferences:
return ret_val
def save(self, resp):
"""Save cookie in the HTTP response object
"""Save cookie in the HTTP reponse obect
"""
for user_setting_name, user_setting in self.key_value_settings.items():
if user_setting.locked:

View File

@ -197,10 +197,10 @@ class BangParser(QueryPartParser):
self.raw_text_query.enginerefs.append(EngineRef(value, 'none'))
return True
# check if prefix is equal with category name
# check if prefix is equal with categorie name
if value in categories:
# using all engines for that search, which
# are declared under that category name
# are declared under that categorie name
self.raw_text_query.enginerefs.extend(EngineRef(engine.name, value)
for engine in categories[value]
if (engine.name, value) not in self.raw_text_query.disabled_engines)
@ -216,7 +216,7 @@ class BangParser(QueryPartParser):
self._add_autocomplete(first_char + suggestion)
return
# check if query starts with category name
# check if query starts with categorie name
for category in categories:
if category.startswith(value):
self._add_autocomplete(first_char + category)
@ -309,7 +309,7 @@ class RawTextQuery:
def getFullQuery(self):
"""
get full query including whitespaces
get full querry including whitespaces
"""
return '{0} {1}'.format(' '.join(self.query_parts), self.getQuery()).strip()
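The parser hunk above completes a `!` bang from category-name prefixes. The prefix match itself reduces to a comprehension over the category table (the table below is a hypothetical stand-in for `searx.engines.categories`):

```python
categories = {'general': [], 'images': [], 'music': []}  # hypothetical category table

def autocomplete_bang(value, first_char='!'):
    # suggest every category whose name starts with the typed value
    return [first_char + c for c in categories if c.startswith(value)]

suggestions = autocomplete_bang('im')
```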

View File

@ -143,9 +143,9 @@ def result_score(result, language):
if language in domain_parts:
weight *= 1.1
occurrences = len(result['positions'])
occurences = len(result['positions'])
return sum((occurrences * weight) / position for position in result['positions'])
return sum((occurences * weight) / position for position in result['positions'])
class ResultContainer:
@ -252,7 +252,7 @@ class ResultContainer:
result['engines'] = set([result['engine']])
# strip multiple spaces and carriage returns from content
# strip multiple spaces and cariage returns from content
if result.get('content'):
result['content'] = WHITESPACE_REGEX.sub(' ', result['content'])
@ -278,7 +278,7 @@ class ResultContainer:
return merged_result
else:
# it's an image
# it's a duplicate if the parsed_url, template and img_src are different
# it's a duplicate if the parsed_url, template and img_src are differents
if result.get('img_src', '') == merged_result.get('img_src', ''):
return merged_result
return None
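The scoring hunk above weights a result by how often and how early it appears across engines: each occurrence contributes the occurrence count scaled by the inverse of its rank. The formula in isolation (the weight value is illustrative; the hunk derives it from language matches):

```python
def result_score(positions, weight=1.0):
    # each position p contributes (occurrences * weight) / p,
    # so earlier and more frequent results score higher
    occurrences = len(positions)
    return sum((occurrences * weight) / position for position in positions)

# a result found at rank 1 by one engine and rank 2 by another:
score = result_score([1, 2])  # 2*1/1 + 2*1/2 = 3.0
```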

View File

@ -60,7 +60,7 @@ def run(engine_name_list, verbose):
stderr.write(f'{BOLD_SEQ}Engine {name:30}{RESET_SEQ}Checking\n')
checker = searx.search.checker.Checker(processor)
checker.run()
if checker.test_results.successful:
if checker.test_results.succesfull:
stdout.write(f'{BOLD_SEQ}Engine {name:30}{RESET_SEQ}{GREEN}OK{RESET_SEQ}\n')
if verbose:
stdout.write(f' {"found languages":15}: {" ".join(sorted(list(checker.test_results.languages)))}\n')

View File

@ -59,7 +59,7 @@ def run():
logger.debug('Checking %s engine', name)
checker = Checker(processor)
checker.run()
if checker.test_results.successful:
if checker.test_results.succesfull:
result['engines'][name] = {'success': True}
else:
result['engines'][name] = {'success': False, 'errors': checker.test_results.errors}

View File

@ -146,7 +146,7 @@ class TestResults:
self.languages.add(language)
@property
def successful(self):
def succesfull(self):
return len(self.errors) == 0
def __iter__(self):
@ -291,7 +291,7 @@ class ResultContainerTests:
self._record_error('No result')
def one_title_contains(self, title: str):
"""Check one of the title contains `title` (case insensitive comparison)"""
"""Check one of the title contains `title` (case insensitive comparaison)"""
title = title.lower()
for result in self.result_container.get_ordered_results():
if title in result['title'].lower():

View File

@ -56,7 +56,7 @@ class OnlineProcessor(EngineProcessor):
def _send_http_request(self, params):
# create dictionary which contain all
# information about the request
# informations about the request
request_args = dict(
headers=params['headers'],
cookies=params['cookies'],

View File

@ -19,7 +19,7 @@ search:
default_lang : "" # Default search language - leave blank to detect from browser information or use codes from 'languages.py'
ban_time_on_fail : 5 # ban time in seconds after engine errors
max_ban_time_on_fail : 120 # max ban time in seconds after engine errors
prefer_configured_language: False # increase weight of results in configured language in ranking
prefer_configured_language: False # increase weight of results in confiugred language in ranking
server:
port : 8888
@ -73,7 +73,7 @@ ui:
outgoing: # communication with search engines
request_timeout : 2.0 # default timeout in seconds, can be override by engine
# max_request_timeout: 10.0 # the maximum timeout in seconds
useragent_suffix : "" # suffix of searx_useragent, could contain information like an email address to the administrator
useragent_suffix : "" # suffix of searx_useragent, could contain informations like an email address to the administrator
pool_connections : 100 # Number of different hosts
pool_maxsize : 10 # Number of simultaneous requests by host
# uncomment below section if you want to use a proxy
@@ -740,13 +740,6 @@ engines:
shortcut: iv
timeout : 5.0
disabled : True
- name: ipfs search
engine: ipfs_search
shortcut: ipfs
paging: True
timeout: 5.0
disabled: True
- name: kickass
engine : kickass
@@ -1291,7 +1284,7 @@ engines:
- name : wiby
engine : json_engine
paging : True
search_url : https://wiby.me/json/?q={query}&p={pageno}
search_url : https://wiby.me/json/?q={query}&o={pageno}0
url_query : URL
title_query : Title
content_query : Snippet
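The `url_query`/`title_query`/`content_query` keys above tell the generic `json_engine` which JSON fields to lift into a result. A minimal stdlib-only sketch of that mapping (hypothetical helper, not searx's actual `json_engine` code) could look like:

```python
import json

def extract_results(raw, url_query, title_query, content_query):
    # Map each JSON object to a result dict using the configured key names,
    # mirroring how a json_engine-style configuration is interpreted.
    results = []
    for obj in json.loads(raw):
        results.append({
            'url': obj[url_query],
            'title': obj[title_query],
            'content': obj[content_query],
        })
    return results

raw = json.dumps([
    {'URL': 'https://example.org', 'Title': 'Example', 'Snippet': 'An example page.'}
])
results = extract_results(raw, 'URL', 'Title', 'Snippet')
print(results)
```

Changing only the three key names in the YAML is enough to adapt the same extraction to a different JSON API.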
@@ -1661,7 +1654,7 @@ engines:
paging: true
first_page_num: 0
search_url: https://search.brave.com/search?q={query}&offset={pageno}&spellcheck=1
url_xpath: //a[@class="result-header"]/@href
url_xpath: //div[@class="snippet fdb"]/a/@href
title_xpath: //span[@class="snippet-title"]
content_xpath: //p[1][@class="snippet-description"]
suggestion_xpath: //div[@class="text-gray h6"]/a
@@ -1684,15 +1677,6 @@ engines:
require_api_key: false
results: HTML
# omnom engine - see https://github.com/asciimoo/omnom for more details
# - name : omnom
# engine : omnom
# paging : True
# base_url : 'http://your.omnom.host/'
# enable_http : True
# categories : general
# shortcut : om
# Doku engine lets you access to any Doku wiki instance:
# A public one or a private/corporate one.
# - name : ubuntuwiki


@@ -37,7 +37,7 @@ def get_user_settings_path():
# find location of settings.yml
if 'SEARX_SETTINGS_PATH' in environ:
# if possible set path to settings using the
# environment variable SEARX_SETTINGS_PATH
# enviroment variable SEARX_SETTINGS_PATH
return check_settings_yml(environ['SEARX_SETTINGS_PATH'])
# if not, get it from /etc/searx, or last resort the codebase
@@ -132,7 +132,7 @@ def load_settings(load_user_setttings=True):
default_settings = load_yaml(default_settings_path)
update_settings(default_settings, user_settings)
return (default_settings,
'merge the default settings ( {} ) and the user settings ( {} )'
'merge the default settings ( {} ) and the user setttings ( {} )'
.format(default_settings_path, user_settings_path))
# the user settings, fully replace the default configuration
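The `update_settings(default_settings, user_settings)` call above overlays the user's YAML onto the defaults. A simplified sketch of such a recursive merge (not searx's exact `update_settings` implementation) is:

```python
def merge_settings(defaults, user):
    # Recursively overlay user values onto the defaults; nested dicts are
    # merged key by key, anything else is replaced outright.
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(defaults.get(key), dict):
            merge_settings(defaults[key], value)
        else:
            defaults[key] = value
    return defaults

defaults = {'server': {'port': 8888, 'bind_address': '127.0.0.1'}}
user = {'server': {'port': 8080}}
merged = merge_settings(defaults, user)
print(merged)
```

The point of merging rather than replacing is that a user settings file only needs to list the keys it changes; everything else keeps its default.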


@@ -96,7 +96,7 @@
{% if 'method' not in locked_preferences %}
{% set method_label = _('Method') %}
{% set method_info = _('Change how forms are submitted, <a href="http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods" rel="external">learn more about request methods</a>') %}
{% set method_info = _('Change how forms are submited, <a href="http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods" rel="external">learn more about request methods</a>') %}
{{ preferences_item_header(method_info, method_label, rtl, 'method') }}
<select class="form-control {{ custom_select_class(rtl) }}" name="method" id="method">
<option value="POST" {% if method == 'POST' %}selected="selected"{% endif %}>POST</option>


@@ -114,6 +114,6 @@ if __name__ == '__main__':
run_robot_tests([getattr(robot, x) for x in dir(robot) if x.startswith('test_')])
except Exception: # pylint: disable=broad-except
errors = True
print('Error occurred: {0}'.format(traceback.format_exc()))
print('Error occured: {0}'.format(traceback.format_exc()))
test_layer.tearDown()
sys.exit(1 if errors else 0)


@@ -10,7 +10,6 @@
# Gabriel Nunes <gabriel.hkr@gmail.com>, 2017
# Guimarães Mello <matheus.mello@disroot.org>, 2017
# Neton Brício <fervelinux@gmail.com>, 2015
# Noémi Ványi <sitbackandwait@gmail.com>, 2022
# pizzaiolo, 2016
# shizuka, 2018
msgid ""
@@ -19,7 +18,7 @@ msgstr ""
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2020-07-09 15:07+0200\n"
"PO-Revision-Date: 2014-01-30 14:32+0000\n"
"Last-Translator: Noémi Ványi <sitbackandwait@gmail.com>, 2022\n"
"Last-Translator: André Marcelo Alvarenga <alvarenga@kde.org>, 2022\n"
"Language-Team: Portuguese (Brazil) (http://www.transifex.com/asciimoo/searx/language/pt_BR/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
@@ -82,7 +81,7 @@ msgstr "erro de busca"
#: searx/webapp.py:634
msgid "{minutes} minute(s) ago"
msgstr "{minutes} minuto(s) atrás"
msgstr "{minutos} minuto(s) atrás"
#: searx/webapp.py:636
msgid "{hours} hour(s), {minutes} minute(s) ago"


@@ -53,7 +53,7 @@ def parse_lang(preferences: Preferences, form: Dict[str, str], raw_text_query: R
return preferences.get_value('language')
# get language
# set specific language if set on request, query or preferences
# TODO support search with multiple languages
# TODO support search with multible languages
if len(raw_text_query.languages):
query_lang = raw_text_query.languages[-1]
elif 'language' in form:
@@ -216,7 +216,7 @@ def get_search_query_from_webapp(preferences: Preferences, form: Dict[str, str])
disabled_engines = preferences.engines.get_disabled()
# parse query, if tags are set, which change
# the search engine or search-language
# the serch engine or search-language
raw_text_query = RawTextQuery(form['q'], disabled_engines)
# set query
@@ -231,7 +231,7 @@ def get_search_query_from_webapp(preferences: Preferences, form: Dict[str, str])
if not is_locked('categories') and raw_text_query.enginerefs and raw_text_query.specific:
# if engines are calculated from query,
# set categories by using that information
# set categories by using that informations
query_engineref_list = raw_text_query.enginerefs
else:
# otherwise, using defined categories to


@@ -225,7 +225,7 @@ def code_highlighter(codelines, language=None):
language = 'text'
try:
# find lexer by programming language
# find lexer by programing language
lexer = get_lexer_by_name(language, stripall=True)
except:
# if lexer is not found, using default one
@@ -647,7 +647,7 @@ def search():
# removing html content and whitespace duplications
result['title'] = ' '.join(html_to_text(result['title']).strip().split())
if 'url' in result and 'pretty_url' not in result:
if 'url' in result:
result['pretty_url'] = prettify_url(result['url'])
# TODO, check if timezone is calculated right


@@ -35,7 +35,7 @@ class UnicodeWriter:
# Fetch UTF-8 output from the queue ...
data = self.queue.getvalue()
data = data.strip('\x00')
# ... and re-encode it into the target encoding
# ... and reencode it into the target encoding
data = self.encoder.encode(data)
# write to the target stream
self.stream.write(data.decode())
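The hunk above shows the queue-and-reencode pattern: the `csv` module writes into an in-memory buffer, and the buffered text is then re-encoded for the target stream. A simplified, self-contained sketch of that pattern (not searx's full `UnicodeWriter`) is:

```python
import csv
import io

class UnicodeWriter:
    """Simplified sketch: csv writes to a StringIO queue, the queue is
    drained, NUL bytes are stripped, and the text is re-encoded before
    being written to the wrapped stream."""

    def __init__(self, stream, encoding='utf-8'):
        self.queue = io.StringIO()          # buffer the csv module writes into
        self.writer = csv.writer(self.queue)
        self.stream = stream
        self.encoding = encoding

    def writerow(self, row):
        self.writer.writerow(row)
        # Fetch the buffered output from the queue ...
        data = self.queue.getvalue().strip('\x00')
        # ... re-encode it into the target encoding, then write it out
        self.stream.write(data.encode(self.encoding).decode(self.encoding))
        # reset the queue for the next row
        self.queue.truncate(0)
        self.queue.seek(0)

out = io.StringIO()
writer = UnicodeWriter(out)
writer.writerow(['title', 'https://example.org'])
print(out.getvalue())
```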


@@ -13,7 +13,7 @@ from searx.engines.wikidata import send_wikidata_query
# ORDER BY (with all the query fields) is important to keep a deterministic result order
# so multiple invocation of this script doesn't change currencies.json
# so multiple invokation of this script doesn't change currencies.json
SARQL_REQUEST = """
SELECT DISTINCT ?iso4217 ?unit ?unicode ?label ?alias WHERE {
?item wdt:P498 ?iso4217; rdfs:label ?label.
@@ -29,7 +29,7 @@ ORDER BY ?iso4217 ?unit ?unicode ?label ?alias
"""
# ORDER BY (with all the query fields) is important to keep a deterministic result order
# so multiple invocation of this script doesn't change currencies.json
# so multiple invokation of this script doesn't change currencies.json
SPARQL_WIKIPEDIA_NAMES_REQUEST = """
SELECT DISTINCT ?iso4217 ?article_name WHERE {
?item wdt:P498 ?iso4217 .


@@ -30,7 +30,7 @@ HTTP_COLON = 'http:'
def get_bang_url():
response = requests.get(URL_BV1, timeout=10.0)
response = requests.get(URL_BV1)
response.raise_for_status()
r = RE_BANG_VERSION.findall(response.text)
@@ -38,7 +38,7 @@ def get_bang_url():
def fetch_ddg_bangs(url):
response = requests.get(url, timeout=10.0)
response = requests.get(url)
response.raise_for_status()
return json.loads(response.content.decode())


@@ -5,7 +5,7 @@ import requests
import re
from os.path import dirname, join
from urllib.parse import urlparse, urljoin
from packaging.version import Version, parse
from distutils.version import LooseVersion, StrictVersion
from lxml import html
from searx import searx_dir
@@ -39,7 +39,7 @@ def fetch_firefox_versions():
if path.startswith(RELEASE_PATH):
version = path[len(RELEASE_PATH):-1]
if NORMAL_REGEX.match(version):
versions.append(Version(version))
versions.append(LooseVersion(version))
list.sort(versions, reverse=True)
return versions
@@ -49,12 +49,12 @@ def fetch_firefox_last_versions():
versions = fetch_firefox_versions()
result = []
major_last = versions[0].major
major_last = versions[0].version[0]
major_list = (major_last, major_last - 1)
for version in versions:
major_current = version.major
major_current = version.version[0]
if major_current in major_list:
result.append(str(version))
result.append(version.vstring)
return result
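The hunk above swaps `distutils.version.LooseVersion` (and its `.version[0]` tuple access) for `packaging.version.Version` (and its `.major` attribute) to pick releases from the two newest major series. A stdlib-only sketch of that selection logic, using a crude stand-in for the version parser, is:

```python
def parse_version(v):
    # Crude stand-in for packaging.version.Version: split a normal
    # release string such as "91.0.1" into a tuple of ints, so that
    # tuple comparison orders versions and index 0 is the major number.
    return tuple(int(part) for part in v.split('.'))

def last_two_major_series(version_strings):
    # Sort newest first, then keep only releases whose major number
    # belongs to the latest or the previous major series.
    versions = sorted(version_strings, key=parse_version, reverse=True)
    major_last = parse_version(versions[0])[0]
    wanted = (major_last, major_last - 1)
    return [v for v in versions if parse_version(v)[0] in wanted]

selected = last_two_major_series(['90.0', '91.0', '91.0.1', '89.0', '90.0.2'])
print(selected)
```

The real script relies on a proper version class precisely because string-splitting like this breaks on pre-release suffixes such as `91.0b3`.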


@@ -18,7 +18,7 @@ engines_languages_file = Path(searx_dir) / 'data' / 'engines_languages.json'
languages_file = Path(searx_dir) / 'languages.py'
# Fetches supported languages for each engine and writes json file with those.
# Fetchs supported languages for each engine and writes json file with those.
def fetch_supported_languages():
engines_languages = dict()


@@ -451,17 +451,17 @@ install_template() {
fi
if [[ -f "${dst}" ]] && cmp --silent "${template_file}" "${dst}" ; then
info_msg "file ${dst} already installed"
info_msg "file ${dst} allready installed"
return 0
fi
info_msg "different file ${dst} already exists on this host"
info_msg "diffrent file ${dst} allready exists on this host"
while true; do
choose_one _reply "choose next step with file $dst" \
"replace file" \
"leave file unchanged" \
"interactive shell" \
"interactiv shell" \
"diff files"
case $_reply in
@@ -474,7 +474,7 @@ install_template() {
"leave file unchanged")
break
;;
"interactive shell")
"interactiv shell")
echo -e "// edit ${_Red}${dst}${_creset} to your needs"
echo -e "// exit with [${_BCyan}CTRL-D${_creset}]"
sudo -H -u "${owner}" -i
@@ -1018,8 +1018,8 @@ nginx_install_app() {
nginx_include_apps_enabled() {
# Add the *NGINX_APPS_ENABLED* infrastructure to a nginx server block. Such
# infrastructure is already known from fedora and centos, including apps (location
# Add the *NGINX_APPS_ENABLED* infrastruture to a nginx server block. Such
# infrastruture is already known from fedora and centos, including apps (location
# directives) from the /etc/nginx/default.d folder into the *default* nginx
# server.
@@ -1521,7 +1521,7 @@ _apt_pkg_info_is_updated=0
pkg_install() {
# usage: TITLE='install foobar' pkg_install foopkg barpkg
# usage: TITEL='install foobar' pkg_install foopkg barpkg
rst_title "${TITLE:-installation of packages}" section
echo -e "\npackage(s)::\n"
@@ -1557,7 +1557,7 @@ pkg_install() {
pkg_remove() {
# usage: TITLE='remove foobar' pkg_remove foopkg barpkg
# usage: TITEL='remove foobar' pkg_remove foopkg barpkg
rst_title "${TITLE:-remove packages}" section
echo -e "\npackage(s)::\n"
@@ -1623,7 +1623,7 @@ git_clone() {
# git_clone <url> <path> [<branch> [<user>]]
#
# First form uses $CACHE/<name> as destination folder, second form clones
# into <path>. If repository is already cloned, pull from <branch> and
# into <path>. If repository is allready cloned, pull from <branch> and
# update working tree (if needed, the caller has to stash local changes).
#
# git clone https://github.com/searx/searx searx-src origin/master searxlogin
@@ -1696,7 +1696,7 @@ lxc_init_container_env() {
# usage: lxc_init_container_env <name>
# Create a /.lxcenv file in the root folder. Call this once after the
# container is initial started and before installing any boilerplate stuff.
# container is inital started and before installing any boilerplate stuff.
info_msg "create /.lxcenv in container $1"
cat <<EOF | lxc exec "${1}" -- bash | prefix_stdout "[${_BBlue}${1}${_creset}] "


@@ -107,7 +107,7 @@ show
:suite: show services of all (or <name>) containers from the LXC suite
:images: show information of local images
cmd
use single quotes to evaluate in container's bash, e.g.: 'echo \$(hostname)'
use single qoutes to evaluate in container's bash, e.g.: 'echo \$(hostname)'
-- run command '...' in all containers of the LXC suite
:<name>: run command '...' in container <name>
install
@@ -178,7 +178,7 @@ main() {
lxc_delete_container "$2"
fi
;;
*) usage "unknown or missing container <name> $2"; exit 42;;
*) usage "uknown or missing container <name> $2"; exit 42;;
esac
;;
start|stop)
@@ -190,7 +190,7 @@ main() {
info_msg "lxc $1 $2"
lxc "$1" "$2" | prefix_stdout "[${_BBlue}${i}${_creset}] "
;;
*) usage "unknown or missing container <name> $2"; exit 42;;
*) usage "uknown or missing container <name> $2"; exit 42;;
esac
;;
show)


@@ -402,7 +402,7 @@ EOF
}
enable_debug() {
warn_msg "Do not enable debug in production environments!!"
warn_msg "Do not enable debug in production enviroments!!"
info_msg "Enabling debug option needs to reinstall systemd service!"
set_service_env_debug true
}


@@ -436,7 +436,7 @@ install_settings() {
choose_one action "What should happen to the settings file? " \
"keep configuration unchanged" \
"use origin settings" \
"start interactive shell"
"start interactiv shell"
case $action in
"keep configuration unchanged")
info_msg "leave settings file unchanged"
@@ -446,7 +446,7 @@ install_settings() {
info_msg "install origin settings"
cp "${SEARX_SETTINGS_TEMPLATE}" "${SEARX_SETTINGS_PATH}"
;;
"start interactive shell")
"start interactiv shell")
backup_file "${SEARX_SETTINGS_PATH}"
echo -e "// exit with [${_BCyan}CTRL-D${_creset}]"
sudo -H -i
@@ -533,7 +533,7 @@ EOF
}
test_local_searx() {
rst_title "Testing searx instance locally" section
rst_title "Testing searx instance localy" section
echo
if service_is_available "http://${SEARX_INTERNAL_HTTP}" &>/dev/null; then
@@ -600,7 +600,7 @@ EOF
}
enable_debug() {
warn_msg "Do not enable debug in production environments!!"
warn_msg "Do not enable debug in production enviroments!!"
info_msg "try to enable debug mode ..."
tee_stderr 0.1 <<EOF | sudo -H -i 2>&1 | prefix_stdout "$_service_prefix"
cd ${SEARX_SRC}
@@ -833,8 +833,8 @@ rst-doc() {
eval "echo \"$(< "${REPO_ROOT}/docs/build-templates/searx.rst")\""
# I use ubuntu-20.04 here to demonstrate that versions are also supported,
# normally debian-* and ubuntu-* are most the same.
# I use ubuntu-20.04 here to demonstrate that versions are also suported,
# normaly debian-* and ubuntu-* are most the same.
for DIST_NAME in ubuntu-20.04 arch fedora; do
(


@@ -27,7 +27,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpreter config
# Plugin to use and interpretor config
single-interpreter = true
# enable master process


@@ -27,7 +27,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpreter config
# Plugin to use and interpretor config
single-interpreter = true
# enable master process


@@ -26,7 +26,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpreter config
# Plugin to use and interpretor config
single-interpreter = true
# enable master process


@@ -26,7 +26,7 @@ disable-logging = true
# The right granted on the created socket
chmod-socket = 666
# Plugin to use and interpreter config
# Plugin to use and interpretor config
single-interpreter = true
# enable master process