Date: Thu, 03 Sep 1992 12:40:14 +0200
From: Steve Linton <sal@dcs.st-and.ac.uk>
Subject: Re: Tables (was Re: Various suggestions)

Tables and keys:

Sorry I was unclear here. The thing is that in a hash-table,
find-first and related operations that depend on the ordering of
the keys are slow (O(n) or O(n log n)), whereas in a tree-type
structure they are O(log n).
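
To make that concrete, here is a minimal sketch (in Python rather than
GAP, purely for illustration; the name find_first_hashed is made up and
a Python dict stands in for the hash table): a find-first query against
a plain hash table has to examine every stored key, because hashing
destroys any useful ordering.

    def find_first_hashed(table):
        # 'table' is an ordinary hash table (a Python dict); its keys are
        # stored in hash order, so the smallest one can only be found by
        # a full O(n) scan.  A balanced search tree answers the same
        # query by walking a single root-to-leaf path, i.e. in O(log n).
        smallest = None
        for key in table:
            if smallest is None or key < smallest:
                smallest = key
        return smallest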

Steve continues:

[...] However, possibly hashing is not the right way to go.

Have you considered something like a B-tree or a 2,3-tree (in Knuth
and AHU respectively)? This will cost you log n on a lookup, but
simply depends on having a fast 3-way comparison. This also allows
recovery in order.
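
To illustrate what "fast 3-way comparison" and "recovery in order"
amount to, here is a rough sketch (again Python, not GAP; the Node23
class and the function names are hypothetical, and insertion with its
rebalancing is left to Knuth/AHU) of the lookup and the in-order walk
over a 2,3-tree.

    class Node23:
        # a 2,3-tree node: one or two sorted keys, and either no
        # children (a leaf) or exactly one more child than it has keys
        def __init__(self, keys, children=None):
            self.keys = keys
            self.children = children or []

    def lookup(node, key):
        # one node per level, so O(log n) comparisons in total
        while node is not None:
            i = 0
            while i < len(node.keys) and key > node.keys[i]:
                i += 1
            # the <, =, > outcome of comparing the sought key with the
            # node's keys decides: found, or which child to descend into
            if i < len(node.keys) and key == node.keys[i]:
                return True
            if not node.children:
                return False
            node = node.children[i]
        return False

    def in_order(node):
        # "recovery in order": the keys come out sorted
        if node is None or not node.keys:
            return []
        if not node.children:
            return list(node.keys)
        out = []
        for i, key in enumerate(node.keys):
            out.extend(in_order(node.children[i]))
            out.append(key)
        out.extend(in_order(node.children[-1]))
        return out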

The cost of doing a lookup in those trees is about the same as doing
a lookup in a sorted list. The advantage of those trees is of course
that insertion and deletion have cost $O(log(n))$, while insertion
into and deletion from a sorted list are $O(n)$ operations. On the other
hand, finding the <i>-th element in such a tree is also an
$O(log(n))$ operation, while it is an $O(1)$ operation in a sorted
list.
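
For comparison, the sorted-list side of this trade-off can be sketched
in a few lines (Python again rather than GAP, using the standard
bisect module; the function names are only illustrative).

    import bisect

    def sorted_lookup(lst, key):
        # binary search: O(log n), about the same cost as the tree lookup
        i = bisect.bisect_left(lst, key)
        return i < len(lst) and lst[i] == key

    def sorted_insert(lst, key):
        # the search is O(log n), but shifting the tail of the list
        # to make room is what makes insertion O(n) overall
        bisect.insort(lst, key)

    def ith_element(lst, i):
        # positional access: O(1) in a list, O(log n) in a balanced
        # tree whose nodes record subtree sizes
        return lst[i]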

Granted. I don't see any way that a complex structure is ever going to
replace a simple array at the implementation level, though they might be
combined in the syntax (like lists, sets, vectors, vecFFE's and blists are
now). As a general-purpose structure, however, I think the objective should
be to do almost everything in O(log n) time (and O(n) space). The change
from O(1) to O(log n) is relatively slight compared to the change from O(log
n) to O(n).

Steve

