A fact from Binary logarithm appeared on Wikipedia's Main Page in the Did you know column on 6 January 2016.
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Same here. I can sort of guess why it works (squaring the scaled input value corresponds to doubling the result), but I would love to see the actual maths behind it.
Yes, I agree. The point of an article like this is to explain how a binary logarithm works, not to show some super-optimized and confusing C version. On the other hand, no one really writes anything in C anymore, unless it needs to run really fast... Moxfyre (ǝɹʎℲxoɯ | contrib) 15:28, 16 July 2010 (UTC)[reply]
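For anyone else puzzling over it, the maths alluded to above is easiest to see in a minimal, unoptimized sketch of the shift-and-square idea (my own illustration, not the article's C version; it assumes the input satisfies x ≥ 1):

```c
#include <stdio.h>

/* Illustrative shift-and-square computation of log2(x) for x >= 1:
 * squaring the scaled input doubles its logarithm, so each squaring
 * shifts the next fractional bit of the answer up to the integer place. */
double log2_by_squaring(double x, int fraction_bits) {
    double result = 0.0;

    /* Integer part: halve x until it lies in [1, 2). */
    while (x >= 2.0) {
        x /= 2.0;
        result += 1.0;
    }

    /* Fractional part, one bit per squaring. */
    double bit = 0.5;
    for (int i = 0; i < fraction_bits; i++) {
        x *= x;              /* log2(x) doubles */
        if (x >= 2.0) {      /* the shifted-up bit is 1 */
            x /= 2.0;
            result += bit;
        }
        bit /= 2.0;
    }
    return result;
}

int main(void) {
    printf("%f\n", log2_by_squaring(10.0, 30)); /* ~3.321928 */
    return 0;
}
```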
Under "Information Theory" the notation lb rather than ld is suddenly used without explanation. Is this a typo? If not, perhaps it should say something like: lg, lx, and lb are sometimes used for base 2 logs.
Rather than listing some logarithmic identities, why not say that it obeys all logarithmic identities, and list only those that are particularly relevant here?
You really started to lose me with big O notation. Is there a way to make this more accessible?
Likewise with bioinformatics.
That'll do for now. I don't know if any of that should hold up the GA. I'll take another look today or tomorrow. My main issue is where the article drifts into specialized subjects without explaining enough for a non-specialist.--JFH (talk) 21:31, 28 December 2015 (UTC)[reply]
Images: I'm pretty sure the calculator logo qualifies as de minimis, so no problems I can see.
I'm going to go ahead and pass the article with the recommendation that my comments be addressed, but I don't think they rise to the level of GA concerns. The prose is clear and concise even if some of the subject matter is difficult for a non-specialist. --JFH (talk) 14:03, 29 December 2015 (UTC)[reply]
I don't know enough about Wikipedia to find out who wrote the "Iterative approximation" section, but to whoever did, thank you. Algorithms for calculating a logarithm are surprisingly hard to find, and that section was far and away the clearest and most helpful description I've found. I'm sure that I'm using the talk page wrong, so feel free to delete this section, but I just had to express my gratitude. Cormac596 (talk) 14:47, 1 June 2022 (UTC)[reply]
I think the calculator is helpful and should be retained. I am not too thrilled by the output of "-infinity" for both 0.0 and -0.0, though, as this is only a one-sided limit. Better (and more consistent with the article) to be "(invalid input)" at 0 just like at -1. —Kusma (talk) 10:01, 20 January 2025 (UTC)[reply]
It doesn't seem much more "clutter" than the graph is. I'm not sure I'd put it where it was (it kind of floats oddly across my screen from the table of contents), but I have no objection to including it somewhere. XOR'easter (talk) 00:20, 21 January 2025 (UTC)[reply]
Below is a Wikipedia-style section focusing solely on how the logarithm is used to approximate the inverse square root:
---
Logarithmic Approximation
In the fast inverse square root algorithm, the floating-point number is first interpreted according to the IEEE 754 format, where any positive number can be expressed as
x = M · 2^E
with M representing the mantissa and E the exponent. This means that the binary logarithm of x can be expressed as
log₂(x) = log₂(M) + E.
The algorithm exploits this property by reinterpreting the bitwise representation of x as an integer. Through a clever manipulation—specifically a right bit-shift which effectively divides the binary exponent by two—the method approximates:
1/√x ≈ 2^(–0.5 · log₂(x)).
A magic constant is then subtracted from the shifted value to further calibrate the result, yielding a quick and efficient initial estimate. This bit-level trick is essentially a rapid computation of the logarithmic relationship that underpins the approximation.
---
This concise explanation isolates the role of the logarithm in generating the initial approximation in the fast inverse square root algorithm. Dominic3203 (talk) 10:43, 4 April 2025 (UTC)[reply]
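For concreteness, here is the widely circulated implementation, essentially as it appears in the Quake III Arena source, with the original comments replaced by descriptive ones:

```c
float Q_rsqrt(float number)
{
    long i;
    float x2, y;
    const float threehalfs = 1.5F;

    x2 = number * 0.5F;
    y  = number;
    i  = *(long *) &y;            /* reinterpret the float's bits as an integer */
    i  = 0x5f3759df - (i >> 1);   /* the shift halves the "logarithm"; the magic
                                     constant handles the bias and the negation */
    y  = *(float *) &i;           /* reinterpret the bits as a float again */
    y  = y * (threehalfs - (x2 * y * y)); /* one Newton iteration to refine */

    return y;
}
```

(Note that the pointer casts violate strict aliasing in modern C; memcpy is the portable way to reinterpret the bits.)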
There is no calculation of the logarithm within this algorithm; it is a trick involving the exponent of a floating point number, which is very much not the same thing as the binary logarithm of the number (because it is an integer and the logarithm generally is not). Additionally, any additions along these lines could only be made from published sources making the same arguments. Where are your sources? —David Eppstein (talk) 17:14, 4 April 2025 (UTC)[reply]
It's a little closer to the binary logarithm than that, but still not the binary logarithm. If you interpret the binary point as being in the right place between the exponent and the mantissa, and adjust for the bias in the exponent, floating point representation is a piecewise-linear approximation to the binary logarithm. It is exactly the binary logarithm when applied to an integer power of two (modulo that binary point and biased exponent), but is linearly interpolated when it falls between two adjacent integer powers of two. —Quantling (talk | contribs) 18:45, 4 April 2025 (UTC)[reply]
"We denote by an integer that has the same binary representation as an FP number and by an FP number that has the same binary representation as an integer . The main idea behind FISR is as follows. If the FP number , given in the IEEE 754 standard, is represented as an integer , then it can be considered a coarse, biased, and scaled approximation of the binary logarithm ...." doi:10.3390/computation9020021unflagged free DOI (link). –jacobolus(t)01:33, 5 April 2025 (UTC)[reply]
A lot gets hidden in that word "coarse". If the "bias" (additive constant) and "scaling" (where to place the binary point) are accounted for then floating point representation (with additional accounting for sign bit, NaN, infinities, and subnormal values) is still only a piecewise linear approximation of the binary logarithm. That's a lot to wrap up in the never-defined word "coarse". —Quantling (talk | contribs) 12:51, 15 April 2025 (UTC)[reply]
When the bit representation of a positive (not sub-normal) floating point number is interpreted as a positive integer, the latter can be related to the binary logarithm of the former. Specifically, the floating point value f has the same bit representation as the integer (p(f) + B)S, where B is a bias, S is a scaling factor, and p(f) is a function that equals k whenever f = 2^k for some integer k and is piecewise linear between these values. For example, this relationship can be exploited to compute a fast inverse square root.
Okay, maybe that could use some wordsmithing. Bigger picture, the idea is to actually describe the relationship rather than to give (possibly misleading) qualitative words that (only loosely) describe it. —Quantling (talk | contribs) 14:17, 15 April 2025 (UTC)[reply]
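To make that piecewise-linear relationship concrete, a small sketch (using the single-precision parameters B = 127 and S = 2^23 for brevity):

```c
#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Compare the bits-as-integer reading of a positive float with its
 * binary logarithm: i / 2^23 - 127 equals log2(f) exactly at powers
 * of two and interpolates linearly in between. */
int main(void) {
    for (float f = 1.0f; f <= 16.0f; f *= 1.5f) {
        uint32_t i;
        memcpy(&i, &f, sizeof i);               /* reinterpret bits safely */
        double approx = i / 8388608.0 - 127.0;  /* S = 2^23, B = 127 */
        printf("f=%8.4f  approx=%8.5f  log2=%8.5f\n", f, approx, log2(f));
    }
    return 0;
}
```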
I think that is far too detailed and too technical to be readable. It would act like an indigestible blob sitting in the article. I think we need something much shorter and punchier, like
The bit representation of a floating point number, reinterpreted as the bit representation of an integer, can be interpreted as a scaled approximation of the binary logarithm. The fast inverse square root algorithm applies this principle by dividing the approximated logarithm by two, rescaling, and reinterpreting the result as a floating point number again.
They add something to the exponent in the floating point representation, so I'd like to see "biased" in there, somewhere near "scaled" (which has to do with the number of digits in the mantissa). Also, because it is an inverse square root, instead of "dividing by two" the approach is "dividing by −2" or "multiplying by −1/2". Please boldly edit if you think you've got something that could work. —Quantling (talk | contribs) 18:06, 15 April 2025 (UTC)[reply]
If adding magic constants to the exponent makes it into a binary logarithm, any logarithm is a binary logarithm. What exactly is binary about the logarithms here? —David Eppstein (talk) 18:18, 15 April 2025 (UTC)[reply]
Maybe:
For a floating point number x = m · 2^e with exponent e and fixed-point mantissa 1 ≤ m < 2, the binary logarithm can be approximated by log₂(x) ≈ e + (m − 1)/ln 2. The fast inverse square root algorithm operates directly on the binary representation of x to multiply this approximation by −1/2, converting it to an approximation of 1/√x.
That way we provide some general information about approximating the binary log without diving into the guts of the fast inverse square root algorithm. —David Eppstein (talk) 18:43, 15 April 2025 (UTC)[reply]
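A quick numeric check of that approximation (a sketch; frexp normalizes to [0.5, 1), so the code renormalizes to get 1 ≤ m < 2):

```c
#include <math.h>
#include <stdio.h>

/* Check log2(x) ≈ e + (m - 1)/ln 2 for x = m * 2^e with 1 <= m < 2,
 * and the multiplication by -1/2 that yields log2(1/sqrt(x)). */
int main(void) {
    double x = 40.0;
    int e;
    double m = 2.0 * frexp(x, &e); /* frexp gives 0.5 <= m < 1 ... */
    e -= 1;                        /* ... so renormalize to 1 <= m < 2 */

    double approx = e + (m - 1.0) / log(2.0);
    printf("log2(%g):         approx %f, exact %f\n", x, approx, log2(x));
    printf("log2(1/sqrt(%g)): approx %f, exact %f\n",
           x, -0.5 * approx, -0.5 * log2(x));
    return 0;
}
```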
I'd say that it is the scale factor that mixes between logarithm bases rather than the additive bias. (I am referring back to the formula i(f) = (p(f) + B)S.) In this case, the scale factor is 2^52 or something like that, for double-precision floating point, because that's how many bits of mantissa you have to shift over. That is, if we didn't allow the freedom of the scale factor then the floating-point bits minus the bias, interpreted as an integer, would be the logarithm with base 2^(1/S) = 2^(2^−52). The reason that this is more like a binary logarithm than some-other-base logarithm is that the scale factor is a nice integer power of two for the binary logarithm rather than some transcendental number for other common logarithms. —Quantling (talk | contribs) 20:40, 15 April 2025 (UTC)[reply]
For what it is worth, my p(f) function is p(f) = ⌊log₂ f⌋ + (f − 2^⌊log₂ f⌋) / 2^⌊log₂ f⌋. Also, S = 2^52 and B = 1023. And together that is i(f) = (p(f) + 1023) · 2^52.
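A sketch checking that relationship numerically, for a few simple values where the product (p(f) + B)S happens to be exactly representable in double precision:

```c
#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Quantling's p(f): floor(log2 f) plus the linear interpolation term. */
static double p(double f) {
    double k = floor(log2(f));
    double pow2k = exp2(k);
    return k + (f - pow2k) / pow2k;
}

/* Verify that the bits of a positive double, read as an integer,
 * equal (p(f) + B) * S with B = 1023 and S = 2^52. */
int main(void) {
    const double B = 1023.0, S = 9007199254740992.0; /* 2^52 */
    double samples[] = { 1.0, 1.5, 2.0, 3.0, 40.0 };
    for (int n = 0; n < 5; n++) {
        double f = samples[n];
        uint64_t i;
        memcpy(&i, &f, sizeof i);
        printf("f=%4g  bits=%20llu  (p(f)+B)*S=%20.0f\n",
               f, (unsigned long long) i, (p(f) + B) * S);
    }
    return 0;
}
```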
One sentence. Not line after line of derivation using technical terms that assume the reader has intimate knowledge of IEEE floating point. We need ONE SENTENCE that describes the approximation of the log from the floating point exponent and significand, and then one sentence briefly stating its usage in the fast inverse square root algorithm. Neither your explanation nor Quantling's is usable as readable article text. Reading further in the fast inverse square root article, it appears the approximation they are using is not k + (m − 1)/ln 2 (which would be good for m very close to 1) but rather k + (m − 1) + σ for a magic constant σ chosen to tune the approximation. —David Eppstein (talk) 00:42, 16 April 2025 (UTC)[reply]
In case it needs stating: because this piecewise linear approximation to the binary logarithm is always less than or equal to the binary logarithm, the authors of the fast inverse square root algorithm observed that one could get closer to the real value on average by adding a small constant. They tuned that constant to get σ.
But I (now) agree with @David Eppstein — we need one sentence that describes that the floating point representation can be used to approximate the binary logarithm. As far as the fast inverse square root goes, in the present article I wouldn't give more than an acknowledgement that the fact of the first sentence is exploited for that. —Quantling (talk | contribs) 13:51, 16 April 2025 (UTC)[reply]
I also modified the PPAP song:
"I have a log base 2 of a, I have a log base 2 of b,"
which would work no matter what the base of the logarithm is. The as_int(1.0) appearances take care of the bias, and the scale factor is never explicitly used. (In fact, the base of the logarithm is 2^(2^−52) but nobody cares.) —Quantling (talk | contribs) 17:34, 16 April 2025 (UTC)[reply]
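Here is my reading of the as_int idea above, as a sketch (as_int is my assumed name for "the bits of a double read as an integer"; the thread does not define it):

```c
#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Assumed meaning of as_int: the raw bits of a double, read as an integer. */
static uint64_t as_int(double f) {
    uint64_t i;
    memcpy(&i, &f, sizeof i);
    return i;
}

int main(void) {
    double a = 10.0;
    /* as_int(a) - as_int(1.0) cancels the bias B*S; dividing by
     * as_int(2.0) - as_int(1.0) = S removes the scale factor, leaving
     * an approximation of log2(a). Dividing instead by
     * as_int(b) - as_int(1.0) would approximate the base-b logarithm,
     * which is why the construction works no matter what the base is. */
    double approx = (double) (as_int(a) - as_int(1.0))
                  / (double) (as_int(2.0) - as_int(1.0));
    printf("approx log2(%g) = %f, exact = %f\n", a, approx, log2(a));
    return 0;
}
```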