In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1.[1] The simplest formal definition is the result of this recurrence relation:

$$\log^* n := \begin{cases} 0 & \text{if } n \le 1; \\ 1 + \log^*(\log n) & \text{if } n > 1 \end{cases}$$

Figure 1. Demonstrating log* 4 = 2 for the base-e iterated logarithm. The value of the iterated logarithm can be found by "zig-zagging" on the curve y = log_b(x) from the input n to the interval [0, 1]; in this case, b = e. The zig-zagging entails starting from the point (n, 0) and iteratively moving to (n, log_b(n)), to (0, log_b(n)), to (log_b(n), 0).

In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than e^(1/e) ≈ 1.444, not only for base 2 and base e. The "super-logarithm" function is "essentially equivalent" to the base-b iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.[2]
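
A minimal sketch of the recurrence above (illustrative code; the function name iterated_log and its defaults are my own, not taken from the cited sources). Passing math.log gives the base-e version shown in Figure 1, while passing math.log2 gives lg*; the second call also illustrates the inverse relationship with tetration, since 65536 = 2^2^2^2 is a tower of four 2s:

    import math

    def iterated_log(n, log=math.log):
        """Count how many times log must be applied to n before the result is <= 1."""
        count = 0
        while n > 1:
            n = log(n)
            count += 1
        return count

    print(iterated_log(4))                 # 2: 4 -> 1.386... -> 0.326...  (base e, as in Figure 1)
    print(iterated_log(65536, math.log2))  # 4: 65536 -> 16 -> 4 -> 2 -> 1, the height of the tower 2^2^2^2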

Analysis of algorithms

The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as:

  - Finding the Delaunay triangulation of a set of points when its Euclidean minimum spanning tree is known, in randomized expected O(n log* n) time.[3]
  - Fürer's algorithm for integer multiplication, which runs in O(n log n 2^O(log* n)) time.
  - Parallel comparison-based algorithms for finding an approximate maximum (an element at least as large as the median) of n values, which run in O(log* n) time.[4]
  - Cole and Vishkin's "deterministic coin tossing" technique for parallel list ranking and for coloring an n-cycle with a constant number of colors, which uses O(log* n) synchronous rounds (a sketch of the color-reduction step appears after this list).[5]
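
As an illustration of how an O(log* n) bound arises, here is a compact sketch of the Cole–Vishkin color-reduction idea on a directed cycle (hypothetical code under my own naming, not taken from the cited paper). Each round replaces a node's color c by 2i + b, where i is the lowest bit position in which c differs from its successor's color and b is bit i of c; one round shrinks colors that fit in k bits to values below 2k, so O(log* n) rounds suffice to reach a constant-size palette. The full algorithm then finishes with a few extra steps to get down to 3 colors.

    def reduce_colors(successor, colors):
        """One Cole-Vishkin round; adjacent nodes keep distinct colors."""
        new = []
        for v, c in enumerate(colors):
            diff = c ^ colors[successor[v]]
            i = (diff & -diff).bit_length() - 1   # lowest bit where c differs from the successor's color
            new.append(2 * i + ((c >> i) & 1))
        return new

    def reduce_to_constant_palette(successor):
        """Iterate rounds until at most 6 colors remain; returns (colors, rounds)."""
        colors = list(range(len(successor)))      # unique IDs serve as the initial colors
        rounds = 0
        while len(set(colors)) > 6:
            colors = reduce_colors(successor, colors)
            rounds += 1
        return colors, rounds

    # Example: a directed cycle on 1000 nodes.
    succ = [(v + 1) % 1000 for v in range(1000)]
    colors, rounds = reduce_to_constant_palette(succ)
    assert all(colors[v] != colors[succ[v]] for v in range(1000))
    print(rounds, max(colors))                    # a handful of rounds; colors lie in {0, ..., 5}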

The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself, or any fixed number of repetitions of it. This is because tetration grows much faster than the k-fold iterated exponential:

$${}^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y} \gg \underbrace{b^{b^{\cdot^{\cdot^{b^{y}}}}}}_{k}$$

so the inverse grows much slower: $\log_b^* x \ll \underbrace{\log_b \log_b \cdots \log_b}_{k}\, x$ for every fixed k. For example, for $x = {}^{7}2$, a power tower of seven 2s, $\log_2^* x = 7$ while $\log_2\log_2\log_2 x = {}^{4}2 = 65536$.

For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.

The base-2 iterated logarithm

    x                   lg* x
    (−∞, 1]             0
    (1, 2]              1
    (2, 4]              2
    (4, 16]             3
    (16, 65536]         4
    (65536, 2^65536]    5

Higher bases give smaller iterated logarithms.
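
The table can be checked with exact integer arithmetic, avoiding floating-point rounding near the interval boundaries, by using the fact that lg* x is the least k for which a tower of k twos reaches x. A minimal illustrative sketch (the helper name lg_star is my own):

    def lg_star(x):
        """Least k such that 2↑↑k >= x, i.e. the base-2 iterated logarithm."""
        k, tower = 0, 1           # 2↑↑0 = 1
        while tower < x:
            tower = 2 ** tower    # grow the tower of twos by one level
            k += 1
        return k

    for x in [1, 2, 4, 16, 65536, 2 ** 65536]:
        print(lg_star(x))         # 0, 1, 2, 3, 4, 5, matching the table

    # Practical only up to x = 2^65536: one level higher, the tower itself becomes
    # far too large to materialize, but as noted above such inputs do not occur in practice.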

Other applications

The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is O(log* n).
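
A short illustrative sketch of additive persistence (the helper name is my own): repeatedly replace a number by its digit sum and count the steps until a single digit, the digital root, remains.

    def additive_persistence(n):
        """Number of digit-sum steps needed to reach the digital root of n >= 0."""
        steps = 0
        while n >= 10:
            n = sum(int(d) for d in str(n))   # replace n by the sum of its decimal digits
            steps += 1
        return steps

    print(additive_persistence(199))       # 3: 199 -> 19 -> 10 -> 1
    print(additive_persistence(2 ** 100))  # still small, even though 2**100 has 31 digits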

In computational complexity theory, Santhanam[6] shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a non-deterministic Turing machine) are distinct up to n√(log* n).

References

  1. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009) [1990]. "The iterated logarithm function, in Section 3.2: Standard notations and common functions". Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 58–59. ISBN 0-262-03384-4.
  2. ^ Furuya, Isamu; Kida, Takuya (2019). "Compaction of Church numerals". Algorithms. 12 (8) 159: 159. doi:10.3390/a12080159. MR 3998658.
  3. ^ Devillers, Olivier (March 1992). "Randomization yields simple O(n log* n) algorithms for difficult Ω(n) problems" (PDF). International Journal of Computational Geometry & Applications. 2 (1): 97–111. arXiv:cs/9810007. doi:10.1142/S021819599200007X. MR 1159844. S2CID 60203.
  4. ^ Alon, Noga; Azar, Yossi (April 1989). "Finding an approximate maximum" (PDF). SIAM Journal on Computing. 18 (2): 258–267. doi:10.1137/0218017. MR 0986665.
  5. ^ Cole, Richard; Vishkin, Uzi (July 1986). "Deterministic coin tossing with applications to optimal parallel list ranking" (PDF). Information and Control. 70 (1): 32–53. doi:10.1016/S0019-9958(86)80023-7. MR 0853994.
  6. ^ Santhanam, Rahul (2001). "On separators, segregators and time versus space" (PDF). Proceedings of the 16th Annual IEEE Conference on Computational Complexity, Chicago, Illinois, USA, June 18-21, 2001. IEEE Computer Society. pp. 286–294. doi:10.1109/CCC.2001.933895. ISBN 0-7695-1053-1.