Exact results are usually not possible when using floating point arithmetic. With a fixed number of bits per number, there will nearly always be a small rounding error.

To determine when the answer is close to correct,
compute its square.
When `oldGuess * oldGuess`
is very close to N, then

N/(oldGuess*oldGuess) == almost 1.00

or

N/(oldGuess*oldGuess) - 1.00 == almost 0.00

Unfortunately, we don't know if "almost 0.00" will be negative or
positive,
so we need to take the absolute value of it to make sure that it is
positive.
The **absolute value** of a number is the number with its negative
sign (if any) removed.
In math books the absolute value of x is
written |x|, so the test is

| N/(oldGuess*oldGuess) - 1.00 | == almost 0.00
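In Java, absolute value is computed with the library method `Math.abs`. A minimal sketch of the expression above, with placeholder values for `N` and `oldGuess` chosen only for illustration:

```java
public class AbsSketch {
    public static void main(String[] args) {
        // Placeholder values for illustration; not from the lesson
        double N = 2.0;
        double oldGuess = 1.41421356237;

        // Math.abs removes the negative sign, if any,
        // so the error is always a positive number
        double error = Math.abs(N / (oldGuess * oldGuess) - 1.00);
        System.out.println(error);
    }
}
```

Whether `N/(oldGuess*oldGuess)` lands slightly above or slightly below 1.00, `error` comes out positive.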

Now we need to decide what "almost 0.00" means. If the variables are all of type double precision, then they will have about 15 decimal places of precision. To be safe, assume that 14 places of precision can be reached. "Almost zero" to 14 places of precision means "less than 0.00000000000001".
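Putting the pieces together, here is a sketch of the complete test (the method name `closeEnough` and the constant name `EPSILON` are my own, not from the lesson):

```java
public class CloseEnough {
    // 14 decimal places of precision, as discussed above
    static final double EPSILON = 0.00000000000001;

    // true when oldGuess squared is very close to n
    static boolean closeEnough(double n, double oldGuess) {
        return Math.abs(n / (oldGuess * oldGuess) - 1.00) < EPSILON;
    }

    public static void main(String[] args) {
        System.out.println(closeEnough(2.0, Math.sqrt(2.0))); // true
        System.out.println(closeEnough(2.0, 1.4));            // false
    }
}
```

With `oldGuess = Math.sqrt(2.0)` the relative error is only a few parts in 10^16, well under `EPSILON`, while `1.4` squared is 1.96, far too large an error to pass.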

What is 0.00000000000001 in scientific notation?