My feeling is that all these bit-parallelism and speed-up approaches to the
basic string-to-string edit distance depend heavily on the raw data you are
working on.
If I understand correctly, you plan to filter the text in order to reduce
the area where dynamic programming needs to be applied. Such algorithms can
achieve sublinear time in most cases, but this holds only for low error
ratios (useful in biocomputing, not in OCR!). What is the foreseen
application?
This archive was generated by hypermail 2b29 : Mon Dec 03 2001 - 11:04:26 MET