Dear GAP-Forum,

In his letter, David Sibley wrote:

gap> g:=ThreeGroup(243,1);;
gap> t:=CharTablePGroup(g);;

It took just about an hour to construct the character table. This is a
lot more than I am used to for groups of this size (roughly) from
version 3.1. However, I was appalled when I looked more closely and
discovered this group is the cyclic group of order 243. An hour to find
the character table of a cyclic group? (This is actual running time,
according to ps.)

Naturally, it is disappointing if an algorithm shows poor performance in
trivial cases. However, this particular example will run just as fast (or
rather: just as slowly) in GAP 3.1.

To explain this seemingly strange behaviour, one has to look a bit inside
the algorithms:

Computing character tables can be a very tedious task. In particular, there
are some time-consuming subroutines that have to be called very often (such
as the identification of a class). Thus the routines do some pre-computation
to make things faster afterwards. However, if you are dealing with an abelian
or even a cyclic group, these computations do not need to be done, as one can
write down the appropriate character table from scratch. The only real
'task' left is attaching the classes of the group to the appropriate
columns of the character table.
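
For a cyclic group, 'writing down the table from scratch' is completely
explicit. If $C_n = \langle g \rangle$ and $\zeta = e^{2\pi i/n}$, the $n$
irreducible characters are

    \chi_j(g^k) = \zeta^{jk},   0 <= j, k < n,

so the character table is just the $n \times n$ matrix $(\zeta^{jk})$, and
the only work left is matching the class of $g^k$ with column $k$.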

Thus the question is: why doesn't the routine detect that the computation
will be that simple?

CharTablePGroup is a routine that computes the character table using the
algorithm of U. Baum, as explained by Thomas Breuer in a previous mail to
the forum. In the next version (hopefully; this was left out of 3.2 for
reasons of time, and since the computation of character tables of cyclic
groups was not very high on our schedule), the general dispatcher CharTable
will provide an all-purpose routine for computing character tables. This
means that computations will be shortened for abelian groups. A routine like
CharTablePGroup will still exist and provide the specific algorithm, which
might result in poor performance for particular examples.

Is there some flaw in the method being used here?

I would not call this behaviour a 'flaw'. CharTablePGroup is a specific
routine. Adding checks for special cases to this routine would result in
poorer performance in all other cases, since the test for the special case
would always have to be run. The handling of these special cases is a task
for the more general dispatcher. The user may then choose whether to use the
general or a specific routine: the general routine is slower, but takes on
the task of selecting a specific routine itself (though this selection might
not be as good as one made by a human). It is simply a question of how the
thinking is divided between human and computer.

I am a bit worried

that there might be similar performance problems if a group happens to

have a large cyclic subgroup or quotient group in the wrong place.

This behaviour will most likely reappear if the group has a large cyclic
factor group. Coping with this behaviour is surely part of the DWIM (Do What
I Mean) problem (baptized this way by Charles Leedham-Green):

If I ask the computer to compute the character table of a large abelian
group, most likely I'm not interested in a long list of irreducible
characters (unless I want to check the memory allocation routines...).
The information I can actually make use of would be that the group *is*
abelian, that its structure is Z_a x Z_b x ..., and maybe *some* irreducible
characters, from which the others can be obtained as tensor products.
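
For abelian groups this tensoring is just pointwise multiplication of
character values. For example, if $G = Z_a \times Z_b$ with generators $x$
and $y$, and $\zeta_a = e^{2\pi i/a}$, $\zeta_b = e^{2\pi i/b}$, then

    \chi_{j,k}(x^m y^n) = \zeta_a^{jm} \zeta_b^{kn},

so all $ab$ irreducible characters are products of powers of the two
'generating' characters $\chi_{1,0}$ and $\chi_{0,1}$.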

If I have a group with a large cyclic factor group, either the group itself
is very large --- in which case the cyclic factor will not matter much ---
or the cyclic factor is of disproportionate size compared to the normal
subgroup. In the latter case the group most likely has some special
structure, and much more information can be gained if I use this special
structure.

Of course this implies that special algorithms for special structures
should be available, i.e. that the program should behave 'intelligently'.
Reaching that point will mean *a lot* of work, which I do not even dare to
estimate.

Finally, I still have not answered how to cope with this behaviour (unless
one wants to wait centuries for the finished GAP 3.$\infty$). I can only
give some suggestions for coping with it:

Before computing a character table, I would check how many conjugacy classes
the group has. In the problematic cases, the group will have a lot of
classes. One can then decide whether one *really* wants the character table
of that kind of group. (If one does want it, there is still the question
whether the table should be computed from the group --- which is necessary
only if one wants a correspondence between class representatives and columns
--- or whether one would rather compute the table itself by
CharTableDirectProduct or similar.)
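
Such a check is cheap compared to the table computation. A small sketch
(using the group from the quoted session; ConjugacyClasses and IsCyclic are
standard library functions):

    gap> g := ThreeGroup( 243, 1 );;
    gap> Length( ConjugacyClasses( g ) );
    243
    gap> IsCyclic( g );
    true

Seeing as many classes as the group has elements tells you immediately that
the group is abelian, and one may then prefer to build the table as a direct
product rather than call CharTablePGroup.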

In case this does not help with your particular problem, feel free to ask;
maybe there is a problem we never thought about.

I hope this rather lengthy pamphlet has shed some light on this problem.

Alexander Hulpke
