MediaWiki extension: CirrusSearch

Get Elasticsearch up and running somewhere. Only Elasticsearch v7.10 is
supported. A compatibility layer for writing to Elasticsearch v6.8 is provided
to support zero-downtime migrations when multiple clusters are available. If
you will be running Elasticsearch on a host separate from the MediaWiki
installation, be careful with the network configuration: never expose an
unprotected node to the internet.

Place the CirrusSearch extension in your extensions directory.
You also need to install the Elastica MediaWiki extension.
Add this to LocalSettings.php:
 wfLoadExtension( 'Elastica' );
 wfLoadExtension( 'CirrusSearch' );
 $wgDisableSearchUpdate = true;

Configure your search servers in LocalSettings.php if you aren't running Elasticsearch on localhost:
 $wgCirrusSearchServers = [ 'elasticsearch0', 'elasticsearch1', 'elasticsearch2', 'elasticsearch3' ];
There are other $wgCirrusSearch variables that you might want to change from their defaults.

Now run this script to generate your Elasticsearch index:
 php $MW_INSTALL_PATH/extensions/CirrusSearch/maintenance/UpdateSearchIndexConfig.php

Now remove $wgDisableSearchUpdate = true from LocalSettings.php.  Updates should start heading to Elasticsearch.

Next bootstrap the search index by running:
 php $MW_INSTALL_PATH/extensions/CirrusSearch/maintenance/ForceSearchIndex.php --skipLinks --indexOnSkip
 php $MW_INSTALL_PATH/extensions/CirrusSearch/maintenance/ForceSearchIndex.php --skipParse
Note that this can take some time.  For large wikis read "Bootstrapping large wikis" below.

Once that is complete add this to LocalSettings.php to funnel queries to Elasticsearch:
 $wgSearchType = 'CirrusSearch';

Bootstrapping large wikis
Since most of the load involved in indexing is parsing the pages in PHP, we provide a few options to split the
process into multiple processes.  Don't worry too much about the database during this process.  It can generally
handle more indexing processes than you are likely to be able to spawn.

General strategy:
0.  Make sure you have a good job queue setup.  It'll be doing most of the work.  In fact, Cirrus won't work
well on large wikis without it.
1.  Generate scripts to add all the pages without link counts to the index.
2.  Execute them any way you like.
3.  Generate scripts to count all the links.
4.  Execute them any way you like.

Step 1:
In bash I do this:
 export PROCS=5 #or whatever number you want
 rm -rf cirrus_scripts
 mkdir cirrus_scripts
 mkdir cirrus_log
 pushd cirrus_scripts
 php extensions/CirrusSearch/maintenance/ForceSearchIndex.php --queue --maxJobs 10000 --pauseForJobs 1000 \
    --skipLinks --indexOnSkip --buildChunks 250000 |
    sed -e 's/$/ | tee -a cirrus_log\/'$wiki'.parse.log/' |
    split -n r/$PROCS
 for script in x*; do sort -R $script > $script.sh && rm $script; done
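The split -n r/$PROCS at the end of the pipeline deals the generated command lines out round-robin, one chunk per process. A minimal sketch of that behavior (assumes GNU coreutils split; the xaa/xab/xac file names are coreutils defaults):

```shell
# Round-robin split, as used above: 10 generated command lines
# dealt out across 3 script files.
cd "$(mktemp -d)"
printf 'cmd%s\n' 1 2 3 4 5 6 7 8 9 10 | split -n r/3
# xaa receives lines 1,4,7,10; xab gets 2,5,8; xac gets 3,6,9.
wc -l xaa xab xac
```

The sort -R that follows then shuffles each chunk, presumably so the per-process scripts don't all work through the same part of the wiki at the same time.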

Step 2:
Just run all the scripts that step 1 made.  Best to run them in screen or something and in the directory above
cirrus_scripts.  So like this:
 bash cirrus_scripts/xaa.sh
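If you prefer one terminal over a screen window per script, a hedged alternative is to background each generated script from the directory above cirrus_scripts (the .sh names follow from step 1; each script already tees its own log):

```shell
# Run every generated indexing script in parallel and wait for all
# of them to finish before returning.
for script in cirrus_scripts/*.sh; do
    bash "$script" &
done
wait    # blocks until every backgrounded indexing process exits
```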

Step 3:
In bash I do this:
 pushd cirrus_scripts
 rm *.sh
 php extensions/CirrusSearch/maintenance/ForceSearchIndex.php --queue --maxJobs 10000 --pauseForJobs 1000 \
    --skipParse --buildChunks 250000 |
    sed -e 's/$/ | tee -a cirrus_log\/'$wiki'.links.log/' |
    split -n r/$PROCS
 for script in x*; do sort -R $script > $script.sh && rm $script; done

Step 4:
Same as step 2 but for the new scripts.  These scripts put more load on Elasticsearch, so you might want to run
them one at a time if you don't have a large Elasticsearch cluster or want to avoid causing too much load.

If you don't have a good job queue you can try the above but lower the buildChunks parameter significantly and
remove the --queue parameter.

Handling Elasticsearch outages
If for some reason in-process updates to Elasticsearch begin failing, you can immediately
set "$wgDisableSearchUpdate = true;" in your LocalSettings.php file to
stop trying to update Elasticsearch.  Once you figure out what is wrong with Elasticsearch, you
should turn those updates back on and then run the following:
php ./maintenance/ForceSearchIndex.php --from <whenever the outage started in ISO 8601 format> --deletes
php ./maintenance/ForceSearchIndex.php --from <whenever the outage started in ISO 8601 format>

The first command picks up all the deletes that occurred during the outage and
should complete quite quickly.  The second command picks up all the updates
that occurred during the outage and might take significantly longer.
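ForceSearchIndex.php expects the timestamp in ISO 8601 format. A sketch for producing one in UTC with GNU date (the '6 hours ago' offset is a placeholder for your actual outage start):

```shell
# Compute an ISO 8601 UTC timestamp suitable for --from; assumes GNU date.
OUTAGE_START="$(date -u -d '6 hours ago' +%Y-%m-%dT%H:%M:%SZ)"
echo "$OUTAGE_START"
```

You can then pass it to both commands: php ./maintenance/ForceSearchIndex.php --from "$OUTAGE_START" --deletes, and again without --deletes.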

Changing $wgNamespacesToBeSearchedDefault
When changing wgNamespacesToBeSearchedDefault you might need to reindex some pages from the source documents.
To achieve this you have two options:
- Blow away the search index and rebuild it from scratch (see the Upgrading section, option A).
- Use the "Saneitizer": php $MW_INSTALL_PATH/extensions/CirrusSearch/maintenance/Saneitize.php

Both options have drawbacks: the first may incur downtime, leaving some pages unfindable while
they are reindexed; the second may take a long time if the wiki is large.

Pool counter
CirrusSearch can leverage the PoolCounter extension to limit the number of concurrent searches sent to
Elasticsearch.  You can do this by installing the PoolCounter extension and then configuring it in
LocalSettings.php like so:
 wfLoadExtension( 'PoolCounter' );
 // Configuration for standard searches.
 $wgPoolCounterConf[ 'CirrusSearch-Search' ] = [
	'class' => 'MediaWiki\PoolCounter\PoolCounterClient',
	'timeout' => 30,
	'workers' => 25,
	'maxqueue' => 50,
 ];
 // Configuration for prefix searches.  These are usually quite quick and
 // plentiful.
 $wgPoolCounterConf[ 'CirrusSearch-Prefix' ] = [
	'class' => 'MediaWiki\PoolCounter\PoolCounterClient',
	'timeout' => 10,
	'workers' => 50,
	'maxqueue' => 100,
 ];
 // Configuration for regex searches.  These are slow and use lots of resources
 // so we only allow a few at a time.
 $wgPoolCounterConf[ 'CirrusSearch-Regex' ] = [
	'class' => 'MediaWiki\PoolCounter\PoolCounterClient',
	'timeout' => 30,
	'workers' => 10,
	'maxqueue' => 10,
 ];
 // Configuration for funky namespace lookups.  These should be reasonably fast
 // and reasonably rare.
 $wgPoolCounterConf[ 'CirrusSearch-NamespaceLookup' ] = [
	'class' => 'MediaWiki\PoolCounter\PoolCounterClient',
	'timeout' => 10,
	'workers' => 20,
	'maxqueue' => 20,
 ];

Upgrading
When you upgrade there are four possible cases for maintaining the index:
1.  You must update the index configuration and reindex from source documents.
2.  You must update the index configuration and reindex from already indexed documents.
3.  You must update the index configuration but no reindex is required.
4.  No changes are required.

If you must do (1) you have two options:
A.  Blow away the search index and rebuild it from scratch.  Marginally faster and uses less disk space
in Elasticsearch, but it empties the index entirely and rebuilds it, so search will be down for a while:
 php updateSearchIndexConfig.php --startOver
 php forceSearchIndex.php

B.  Build a copy of the index, reindex to it, and then force a full reindex from source documents.  Uses
more disk space but search should be up the entire time:
 php updateSearchIndexConfig.php --reindexAndRemoveOk --indexIdentifier now
 php forceSearchIndex.php

If you must do (2) you really have only one option:
A.  Build a copy of the index and reindex to it:
 php updateSearchIndexConfig.php --reindexAndRemoveOk --indexIdentifier now
 php forceSearchIndex.php --from <time when you started updateSearchIndexConfig.php in YYYY-mm-ddTHH:mm:ssZ> --deletes
 php forceSearchIndex.php --from <time when you started updateSearchIndexConfig.php in YYYY-mm-ddTHH:mm:ssZ>
or for the Bash inclined:
 export REINDEX_START=$(date -u +%Y-%m-%dT%H:%M:%SZ)
 php updateSearchIndexConfig.php --reindexAndRemoveOk --indexIdentifier now
 php forceSearchIndex.php --from $REINDEX_START --deletes
 php forceSearchIndex.php --from $REINDEX_START

If you must do (3) you again only have one option:
A.  Same as (2.A)

4 is easy!

The safest thing if you don't know what is required for your update is to execute (1.B).

Production suggestions


All the general rules for making Elasticsearch production-ready apply here.  So you don't have to round
them up yourself, here is a list.  Some of these steps are obvious; others will take some research.

** NOTE: this list was written for 0.90 so it may not work well for 1.0.  It'll be revised when I have
more experience with 1.0.  --Nik

1.  Have >= 3 nodes.
2.  Configure Elasticsearch for memlock.
3.  Change each node's elasticsearch.yml file in a few ways.
3a.  Change node name to the real host name.
3b.  Turn off auto creation and some other scary stuff by adding this (tested with 0.90.4):
 ################################### Actions #################################
 ## Modulo some small changes to comments this section comes directly from the
 ## wonderful Elasticsearch mailing list, specifically Dan Everton.
 # Require explicit index creation.  ES never auto creates the indexes the way we
 # like them.
 action.auto_create_index: false

 # Protect against accidental close/delete operations on all indices. You can
 # still close/delete individual indices.
 action.disable_close_all_indices: true
 action.disable_delete_all_indices: true

 # Disable ability to shutdown nodes via REST API.
 action.disable_shutdown: true

Testing
See the tests directory.

Job Queue
Cirrus makes heavy use of the job queue.  You can run it without any job queue customization but
if you switch the job queue to Redis with checkDelay enabled then Cirrus's results will be more
correct.  The reason for this is that this configuration allows Cirrus to delay link counts
until Elasticsearch has appropriately refreshed.  This is an example of configuring it:
 $redisPassword = '<password goes here>';
 $wgJobTypeConf['default'] = [
	'class' => 'JobQueueRedis',
	'order' => 'fifo',
	'redisServer' => 'localhost',
	'checkDelay' => true,
	'redisConfig' => [
		'password' => $redisPassword,
	],
 ];

Note: some MediaWiki setups have trouble running the job queue.  It can be finicky.  The most
sure-fire way to get it to work is also the slowest.  Add this to your LocalSettings.php:
 $wgRunJobsAsync = false;
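If the queue still falls behind with $wgRunJobsAsync disabled, a common fallback (an assumed setup, not from this README; runJobs.php is MediaWiki core's stock job runner) is to drain the queue from cron, for example with an /etc/cron.d fragment:

```
# Assumed /etc/cron.d fragment: run MediaWiki's stock job runner every
# minute as the web server user, capped at 100 jobs per invocation.
* * * * * www-data php /path/to/mediawiki/maintenance/runJobs.php --maxjobs 100
```

Adjust the path, user, and --maxjobs cap to your installation.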

Development
The fastest way to get started with CirrusSearch development is to use MediaWiki-Vagrant.
1.  Follow the steps here: https://www.mediawiki.org/wiki/MediaWiki-Vagrant#Quick_start
2.  Now execute the following:
 vagrant enable-role cirrussearch
 vagrant provision

This can take some time but it produces a clean development environment in a virtual machine
that has everything required to run Cirrus.

Hooks
See docs/hooks.txt.

Licensing information
CirrusSearch makes use of the Elastica extension containing the Elastica library to connect
to Elasticsearch <http://elastica.io/>. It is Apache licensed and you can read the license