Two-way string-matching algorithm
| Class | String-searching algorithm |
| --- | --- |
| Data structure | Any string with an ordered alphabet |
| Worst-case performance | O(n) |
| Best-case performance | O(n) |
| Worst-case space complexity | ⌈log₂ m⌉ |
In computer science, the two-way string-matching algorithm is a string-searching algorithm, discovered by Maxime Crochemore and Dominique Perrin in 1991.[1] It takes a pattern of size m, called a “needle”, preprocesses it in linear time O(m), producing information that can then be used to search for the needle in any “haystack” string, taking only linear time O(n) with n being the haystack's length.
The two-way algorithm can be viewed as a combination of the forward-going Knuth–Morris–Pratt algorithm (KMP) and the backward-running Boyer–Moore string-search algorithm (BM). Like those two, the 2-way algorithm preprocesses the pattern to find partially repeating periods and computes “shifts” based on them, indicating what offset to “jump” to in the haystack when a given character is encountered.
Unlike BM and KMP, it uses only O(log m) additional space to store information about those partial repeats: the search pattern is split into two parts (its critical factorization), represented only by the position of that split. Being a number less than m, it can be represented in ⌈log₂ m⌉ bits. This is sometimes treated as "close enough to O(1) in practice", as the needle's size is limited by the size of addressable memory; the overhead is a number that can be stored in a single register, and treating it as O(1) is like treating the size of a loop counter as O(1) rather than log of the number of iterations. The actual matching operation performs at most 2n − m comparisons.[2]
Breslauer later published two improved variants performing fewer comparisons, at the cost of storing additional data about the preprocessed needle:[3]
- The first one performs at most n + ⌊(n − m)/2⌋ comparisons, ⌈(n − m)/2⌉ fewer than the original. It must however store ⌈log m⌉ additional offsets in the needle, using O(log² m) space.
- The second adapts it to only store a constant number of such offsets, denoted c, but must perform n + ⌊(1⁄2 + ε) * (n − m)⌋ comparisons, with ε = 1⁄2 · (F(c+2) − 1)⁻¹ = O(φ⁻ᶜ) going to zero exponentially quickly as c increases (here F(i) denotes the i-th Fibonacci number and φ the golden ratio).
The algorithm is considered fairly efficient in practice, being cache-friendly and using several operations that can be implemented in well-optimized subroutines. It is used by the C standard libraries glibc, newlib, and musl, to implement the memmem and strstr family of substring functions.[4][5][6] As with most advanced string-search algorithms, the naïve implementation may be more efficient on small-enough instances;[7] this is especially so if the needle isn't searched in multiple haystacks, which would amortize the preprocessing cost.
Critical factorization
Before we define critical factorization, we should define the following notions (a worked example follows the list):[1]
- Factorization: a string is considered factorized when it is split into two parts. If a string x is split into two parts (u, v), then (u, v) is called a factorization of x.
- Period: a period p for a string x is defined as a value such that for any integer 0 < i ≤ len(x) − p, x[i] = x[i + p]. In other words, "p is a period of x if two letters of x at distance p always coincide". The minimum period of x is a positive integer denoted as p(x).
- A repetition w in (u, v) is a non-empty string such that:
- w is a suffix of u or u is a suffix of w;
- w is a prefix of v or v is a prefix of w;
- In other words, w occurs on both sides of the cut with a possible overflow on either side. Each factorization trivially has at least one repetition, the string vu.[2]
- A local period is the length of a repetition in (u, v). The smallest local period in (u, v) is denoted as r(u, v). For any factorization (u, v) of x, 0 < r(u, v) ≤ len(x).
- A critical factorization is a factorization (u, v) of x such that r(u, v) = p(x). For a needle of length m in an ordered alphabet, it can be computed in 2m comparisons, by computing the lexicographically larger of two ordered maximal suffixes, defined for order ≤ and ≥.[6]
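As a concrete illustration of these definitions, the following short C program brute-forces the global period and the smallest local period at every internal split of the example needle "aabaa", marking the splits that are critical. It is only a sketch to illustrate the definitions (the helper names are ad hoc), not part of the two-way algorithm itself, which locates a critical factorization far more efficiently.

```c
#include <stdio.h>
#include <string.h>

/* Smallest period of x: least d >= 1 with x[q] == x[q + d] for all valid q. */
static size_t global_period(const char *x, size_t xl)
{
    for (size_t d = 1; d < xl; d++) {
        size_t q = 0;
        while (q + d < xl && x[q] == x[q + d])
            q++;
        if (q + d == xl)
            return d;
    }
    return xl;
}

/* Smallest local period at the split (x[0..cut-1], x[cut..xl-1]).
 * d is a local period iff x[q] == x[q + d] for every q with
 * max(0, cut - d) <= q < min(cut, xl - d); indices falling outside x
 * impose no constraint, which models the allowed "overflow". */
static size_t local_period(const char *x, size_t xl, size_t cut)
{
    for (size_t d = 1; d <= xl; d++) {
        size_t lo = cut > d ? cut - d : 0;
        size_t hi = cut < xl - d ? cut : xl - d;
        size_t q = lo;
        while (q < hi && x[q] == x[q + d])
            q++;
        if (q >= hi)
            return d;
    }
    return xl;   /* never reached: d = xl always qualifies */
}

int main(void)
{
    const char *x = "aabaa";
    size_t xl = strlen(x), p = global_period(x, xl);
    printf("p(\"%s\") = %zu\n", x, p);
    for (size_t cut = 1; cut < xl; cut++) {
        size_t r = local_period(x, xl, cut);
        printf("(\"%.*s\", \"%s\"): r = %zu%s\n",
               (int)cut, x, x + cut, r, r == p ? "   <- critical" : "");
    }
    return 0;
}
```

For "aabaa" the program reports p(x) = 3 and marks the splits ("aa", "baa") and ("aab", "aa") as critical; the split ("a", "abaa"), by contrast, has local period 1, since the single letter "a" already occurs on both sides of that cut.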
The algorithm
The algorithm starts by computing a critical factorization of the needle as its preprocessing step. This step produces the index (starting point) of the periodic right half and the period of that stretch. The suffix computation below follows the authors' formulation; it can alternatively be done with Duval's algorithm, which is simpler and still linear-time, but slower in practice.[8]
```
// Shorthand for inversion.
function cmp(a, b)
    if a > b return 1
    if a = b return 0
    if a < b return -1

function maxsuf(n, rev)
    l ← len(n)
    p ← 1     // currently known period.
    k ← 1     // index for period testing, 0 < k <= p.
    j ← 0     // index for maxsuf testing; always greater than i.
    i ← -1    // the proposed starting index of maxsuf (the suffix begins at i + 1).

    while j + k < l
        cmpv ← cmp(n[j + k], n[i + k])
        if rev
            cmpv ← -cmpv    // invert the comparison
        if cmpv < 0
            // Suffix (j+k) is smaller. Period is the entire prefix so far.
            j ← j + k
            k ← 1
            p ← j - i
        else if cmpv = 0
            // They are the same - we should go on.
            if k = p
                // We are done checking this stretch of p. Reset k.
                j ← j + p
                k ← 1
            else
                k ← k + 1
        else
            // Suffix is larger. Start over from here.
            i ← j
            j ← j + 1
            p ← 1
            k ← 1
    return [i, p]

function crit_fact(n)
    [idx1, per1] ← maxsuf(n, false)
    [idx2, per2] ← maxsuf(n, true)
    if idx1 > idx2
        return [idx1, per1]
    else
        return [idx2, per2]
```
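For example, tracing this pseudocode on the needle "aabaa" (the needle used in the worked example above): maxsuf(n, false) returns [1, 3] and maxsuf(n, true) returns [-1, 3], so crit_fact returns [1, 3]. The critical split is therefore ("aa", "baa"), i.e. the periodic right half starts at index 2, and the stored period is 3, which is also the global period of "aabaa".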
The matching phase first compares the needle against the haystack on the right-hand side of the critical factorization, and then on the left-hand side only if the right-hand side matches. Linear-time skipping is done using the period.
```
function match(n, h)
    nl ← len(n)
    hl ← len(h)
    [l, p] ← crit_fact(n)
    P ← {}    // set of match positions
    // Periodicity check: does the left half n[0..l] reappear one period later?
    // Use a library function like memcmp, or write your own loop.
    if n[0] ... n[l] = n[p] ... n[p + l]
        // Periodic needle: after a shift by p, the first nl - p characters
        // are already known to match.
        pos ← 0
        s ← 0
        // The search loop that applies the period-based skips is filled in
        // by the C sketch below.
```
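Below is a self-contained C sketch of the complete search, filling in the loop that the pseudocode above leaves open. It loosely follows the structure of glibc's str-two-way.h (cited above); the names max_suffix, crit_fact and two_way are illustrative rather than any library's API, and unlike the pseudocode it returns only the first occurrence, as memmem or strstr would.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Maximal suffix under the normal (rev = 0) or inverted (rev = 1) order.
 * Returns the index of the last character of the left half; *period
 * receives the period of that maximal suffix. */
static size_t max_suffix(const unsigned char *n, size_t m, int rev, size_t *period)
{
    size_t i = (size_t)-1;          /* maximal suffix starts at i + 1 */
    size_t j = 0, k = 1, p = 1;
    while (j + k < m) {
        int c = (int)n[j + k] - (int)n[i + k];
        if (rev)
            c = -c;
        if (c < 0) {                /* candidate suffix smaller: period grows */
            j += k;
            k = 1;
            p = j - i;
        } else if (c == 0) {        /* inside a repetition of the period */
            if (k == p) { j += p; k = 1; } else k++;
        } else {                    /* candidate suffix larger: restart there */
            i = j++;
            k = p = 1;
        }
    }
    *period = p;
    return i;
}

/* Critical factorization: the larger of the two maximal-suffix positions. */
static size_t crit_fact(const unsigned char *n, size_t m, size_t *period)
{
    size_t p1, p2;
    size_t i1 = max_suffix(n, m, 0, &p1);
    size_t i2 = max_suffix(n, m, 1, &p2);
    if (i1 + 1 > i2 + 1) {          /* +1 so that (size_t)-1 compares smallest */
        *period = p1;
        return i1;
    }
    *period = p2;
    return i2;
}

/* First occurrence of n (length m >= 1) in h (length hl), or NULL. */
static const unsigned char *two_way(const unsigned char *h, size_t hl,
                                    const unsigned char *n, size_t m)
{
    size_t p;
    size_t l = crit_fact(n, m, &p);
    size_t suf = l + 1;             /* index where the right half starts */
    size_t mem0 = 0;                /* how much of n survives a shift by p */

    /* Periodicity check: does the left half recur one period later? */
    if (memcmp(n, n + p, suf) == 0)
        mem0 = m - p;
    else
        p = (suf > m - suf ? suf : m - suf) + 1;   /* larger skip is then safe */

    size_t pos = 0, mem = 0;
    while (pos + m <= hl) {
        size_t i = suf > mem ? suf : mem;          /* scan the right half forwards */
        while (i < m && n[i] == h[pos + i])
            i++;
        if (i < m) {
            pos += i - suf + 1;                    /* skip just past the mismatch */
            mem = 0;
        } else {
            size_t j = suf;                        /* scan the left half backwards */
            while (j > mem && n[j - 1] == h[pos + j - 1])
                j--;
            if (j <= mem)
                return h + pos;                    /* occurrence found */
            pos += p;                              /* shift by the (pseudo-)period */
            mem = mem0;
        }
    }
    return NULL;
}

int main(void)
{
    const char *hay = "GATCCATATATATAG", *nee = "ATATAT";
    const unsigned char *r = two_way((const unsigned char *)hay, strlen(hay),
                                     (const unsigned char *)nee, strlen(nee));
    if (r)
        printf("found at offset %td\n", r - (const unsigned char *)hay);
    else
        printf("not found\n");
    return 0;
}
```

The periodic and non-periodic cases share one loop here: mem0 is non-zero only when the needle passed the periodicity check, in which case the prefix of length m − p is known to still match after a shift by the period. The C library implementations cited above add further practical refinements, such as special cases for very short needles and additional shift heuristics, on top of this core loop.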
References
1. Crochemore, Maxime; Perrin, Dominique (1 July 1991). "Two-way string-matching" (PDF). Journal of the ACM. 38 (3): 650–674. doi:10.1145/116825.116845. S2CID 15055316.
2. "Two Way algorithm".
3. Breslauer, Dany (May 1996). "Saving comparisons in the Crochemore-Perrin string-matching algorithm". Theoretical Computer Science. 158 (1–2): 177–192. doi:10.1016/0304-3975(95)00068-2.
4. "musl/src/string/memmem.c". Retrieved 23 November 2019.
5. "newlib/libc/string/memmem.c". Retrieved 23 November 2019.
6. "glibc/string/str-two-way.h".
7. "Eric Blake - Re: [PATCH] Improve performance of memmem". Newlib mailing list.
8. Adamczyk, Zbigniew; Rytter, Wojciech (May 2013). "A note on a simple computation of the maximal suffix of a string". Journal of Discrete Algorithms. 20: 61–64. doi:10.1016/j.jda.2013.03.002.