Sphinx in Action: How Sphinx handles text during indexing

This is a post from the series called “Sphinx in Action”. See the whole series by clicking here.

When integrating full-text search into your application, it’s important to think about how your text gets tokenized, i.e. how it gets split up into words. This is the first process your data goes through once you’ve built the config and begun indexing.

Sphinx includes a variety of settings that give you full control of the way your text gets tokenized. They are:

  • charset_table and ngram_chars; these let you define the characters that will be treated as normal characters, while everything else will be treated as a separator (see the charset_table sketch after this list).
    You can also use:

    • Ranges: a..z
    • Char mapping: A->a
    • Range mapping: A..Z->a..z
    • Char codes: U+410..U+42F
  • ngram_chars; if you have to deal with Chinese, Japanese or Korean text, whose structure differs significantly from other languages, this can be useful. It inserts a separator after each CJK character, so instead of one long word each character gets indexed as a separate token. The same happens with the query, allowing your users to find what they’re looking for.
    Example:
    ngram_chars = U+3000..U+2FA1F
  • ignore_chars; lets you totally ignore some characters.
  • blend_chars; allows you to make some characters act as both separators and normal characters. For example, if you have Twitter-related things like @username in your data and want to allow users to search for the exact Twitter nicks, you can easily do so by adding ‘@’ to blend_chars; then the query ‘@username’ wouldn’t find the text ‘username’, while the query ‘username’ would still find them both.
    Example:
    blend_chars = @
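
A quick sketch of what a charset_table definition might look like (illustrative only; real-world tables usually list many more ranges and mappings than this):

    charset_table = 0..9, a..z, _, A..Z->a..z, U+410..U+42F->U+430..U+44F, U+430..U+44F

This keeps digits, lowercase Latin letters and the underscore as word characters, folds uppercase Latin to lowercase, and maps uppercase Cyrillic (U+410..U+42F) onto lowercase Cyrillic (U+430..U+44F); any character not listed acts as a separator.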

Exceptions, wordforms and stopwords

That concludes the character-level settings. The next stage the text goes through while being indexed is word-level handling, controlled by stopwords, the word length directives, exceptions and wordforms. A combined configuration sketch pulling these directives together follows the list below.

  • By using stopwords, you can provide one or more lists of words that will not be indexed at all. These are usually very frequent words such as ‘a’, ‘the’, ‘and’, etc. There are two goals here. Firstly, it improves search quality, because once these words are excluded from the index and from the query, matching becomes more flexible: for example, if you have ‘on’, ‘a’, ‘the’ and ‘my’ in the stopwords, the query ‘search on my site’ will also find phrases such as ‘search on a site’, ‘search on the site’, ‘search my site’ and so on. Secondly, it decreases the size of your index, because indexing words that occur in almost all documents takes more resources and thus slows down performance.
  • The related directive used to configure stopword behavior is stopword_step. If it is set to one (the default value), the query “search on my site” will match any of the following: “search on a site”, “search on the site”, or in general “search + ANY_STOPWORD + ANY_STOPWORD + site”. However, if it is set to zero it will also match “search my site” or “search site”. In other words, all stopwords will simply be ignored.
  • min_word_len; this specifies the minimum word length to index. Be careful to use the related directive overshort_step correctly: if it’s set to 1 and your min_word_len = 2, the query “search on site” will not match “search on a site”; however, if it’s set to 0 it will match that phrase.
  • prefix/infix directives help you filter out substrings that shouldn’t be indexed and, conversely, index the substrings that should be.
    It is not only important to index whole words; sometimes it also makes sense to search by close word variants, e.g. if you type in “dogs” you might also want to find “dog”, or you may want “hero” to find “superhero”. This is enabled by a set of directives: min_prefix_len, min_infix_len, prefix_fields, infix_fields, enable_star, expand_keywords. Using these, you can configure exactly how your words should be split into substrings: how many characters may be trimmed at the end or at both ends of the word, which full-text fields this should apply to, and whether to support wildcard syntax (e.g. dog*) or to treat all query words as substrings. Be aware that using prefixes and infixes increases your index size and might affect performance.
  • Exceptions and wordforms
    Another thing Sphinx allows you to do is define lists of words that should be mapped to each other, which means you can tell Sphinx to treat USA, U.S.A, US, U.S, America, United States and United States of America as one and the same word, for example. To do this you should use the ‘exceptions’ directive. It works at a very low level, before the text is even tokenized. Using it, you can map all related words to a single one (for example USA), and the same will happen with the search query. This might dramatically increase your search quality, especially if you have to deal with products that can have different names and abbreviations which all mean the same thing (PlayStation, Play Station, PS, Sony PlayStation etc.). Another reason to use exceptions is to index something which contains a stopword or a very short word that would otherwise be dropped. For example, “The Matrix” would be reduced to “Matrix”, or “vitamin a” would become just “vitamin” when min_word_len is 2 or greater.
    *Note: Since exceptions work before tokenizing they have to be case sensitive (at least this is how it works now).

    Example:
    U.S.A. => USA
    U.S. => USA
    US => USA
    us => USA

    Sometimes it makes sense to map a word to itself in the exceptions:
    AT&T => AT&T

    This lets the user search for ‘AT&T’ and find exactly ‘AT&T’, not the separate words ‘AT’ and ‘T’, which is what the tokenizer would return if ‘&’ is a separator.

    The ‘wordforms’ directive is similar to exceptions, with one difference: it is applied after tokenizing, so wordforms are case insensitive (which is good), but you cannot use them to handle cases like ‘AT&T’ (which is bad). On the other hand, wordforms work much faster, as they were designed to handle millions of different word forms, and they can be especially useful when combined with stemming.

  • Stemming
    Using stemming you can improve your search quality even more. For instance, if you enable the English stemmer, ‘walking’, ‘walks’ and ‘walked’ will all be converted to ‘walk’. The same happens with the query, so you will be able to find ‘he was walking on the street’ by searching for ‘walked’.
    Sphinx supports English, Russian and Czech stemming out of the box, and if it was built with --with-libstemmer it supports other languages via the Snowball libstemmer library.

    These morphology processors are not perfect, but as mentioned above regarding wordforms, they can be especially useful when combined with stemming: once a word is found in the wordforms, it won’t get processed later by the stemmer. Therefore, you can override something the stemmer doesn’t handle perfectly. For example, ‘does’ gets converted to ‘doe’ by the English stemmer, but you can override this using the wordforms like this:

    does > do
    With this in place, a search for ‘does’ will also match ‘do’.

  • html stripping
    Another nice feature is HTML stripping. Sphinx is often used to search web pages, which are HTML documents, and with the ‘html_strip’ directive you can have Sphinx do the HTML parsing job for you.
    This is important when you still need the markup in your data source and don’t want to store both raw and stripped versions of the text.
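
To tie the word-level directives above together, here is a minimal sketch of how they might sit in an index definition in sphinx.conf. The index name, source name and the stopwords.txt / exceptions.txt / wordforms.txt paths are made up for illustration, and the values are examples rather than recommendations:

    index idx
    {
        source         = src
        path           = /var/data/idx

        stopwords      = /etc/sphinx/stopwords.txt
        stopword_step  = 1
        min_word_len   = 2
        overshort_step = 1

        min_prefix_len = 3
        enable_star    = 1

        exceptions     = /etc/sphinx/exceptions.txt
        wordforms      = /etc/sphinx/wordforms.txt
        morphology     = stem_en

        html_strip     = 1
    }

Since wordforms are applied before the stemmer, an entry like ‘does > do’ in wordforms.txt overrides whatever stem_en would otherwise produce, as described above.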

To figure out exactly how your current index settings work, you can use the ‘call keywords()’ function in SphinxQL:

mysql> call keywords('abc a b c the AT&T A&BULL', 'idx');
+-----------+------------+
| tokenized | normalized |
+-----------+------------+
| abc       | abc        |
| b         | b          |
| c         | c          |
| AT&T      | AT&T       |
| bull      | bull       |
+-----------+------------+
5 rows in set (0.00 sec)

You can see that the text ‘abc a b c the AT&T A&BULL’ was split into the words ‘abc’, ‘b’, ‘c’, ‘AT&T’ and ‘bull’. ‘a’ and ‘the’ were skipped because they’re stopwords. ‘AT&T’ is an exception, which is why it was not split into ‘AT’ and ‘T’ as happened with ‘A&BULL’: the latter was split into ‘A’ and ‘BULL’, and then ‘A’ was not indexed because it’s a stopword.
In the SphinxAPI this function is called ‘BuildKeywords()’.
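
As a further illustration, on an index with morphology = stem_en enabled (‘idx_stemmed’ here is just a made-up name), the tokenized and normalized columns start to differ, so CALL KEYWORDS is a convenient way to check what the stemmer actually does. The output would look roughly like this:

mysql> call keywords('walking walks walked', 'idx_stemmed');
+-----------+------------+
| tokenized | normalized |
+-----------+------------+
| walking   | walk       |
| walks     | walk       |
| walked    | walk       |
+-----------+------------+
3 rows in set (0.00 sec)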

2 Comments

Kai, November 3rd, 2013 at 1:07 pm

Good article, but I think it’s still impossible to get correct results when you need to search for USA or U.S.A. or U S A without manually creating exceptions or wordforms. Let’s try:

1. Default settings:
indexed: U S A and U.S.A
needed: USA

2. Added dot (‘.’) to charset_table:
indexed: U.S.A.
needed: USA

3. Added dot to ignore_chars:
indexed: USA
needed: U.S.A and U S A
(I remove the dots before searching in the application, so after that only U S A is still needed)

4. Added dot to blend_chars (different blend modes):
indexed: U S A and U.S.A
needed: USA

Am I missing something?

amonakhov, November 8th, 2013 at 1:29 pm

Hello, Kai.

First of all, there is a solution based on exceptions. It’s obvious of course and the one most commonly used.

But if you are looking for a solution without manual mapping, then you could have something like this:

sphinx.conf:

ignore_chars = .
exceptions = exceptions.txt

exceptions.txt:

U S A => USA

This will provide the following results:

> call keywords('U.S.A. USA U S A', 'Test')

tokenized, normalized
usa, usa
usa, usa
usa, usa

So basically this is a combination of the exceptions and ignore_chars options, but there is only one exception and it includes spaces (not dots).

Thanks!
