The next iteration in all four examples should give us a zero derivative, therefore it should reach this part of the code:
print ("zero derivative, x0=",x0," i=",i, " xi=", x)
By executing the function we see that for (x**2 - 1) we converge to root = 1, while for (x**100000000 - 1) we get:
('zero derivative, x0=', 0, ' i=', 0, ' xi=', 0)
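For reference, here is a minimal sketch (hypothetical names, not the original assignment code) of a Newton iteration with this zero-derivative guard. It also shows one floating-point effect that may be relevant: for a huge exponent n, the derivative term n * x**(n - 1) underflows to exactly 0.0 in double precision whenever |x| < 1, so the guard fires even at points where the true derivative is nonzero.

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method with a guard for a vanishing derivative."""
    x = x0
    for i in range(max_iter):
        d = df(x)
        if d == 0:
            print("zero derivative, x0=", x0, " i=", i, " xi=", x)
            return None
        x_next = x - f(x) / d
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

n = 100_000_000
# 0.5**(n - 1) is about 2**-99999999, far below the smallest subnormal
# double (~2**-1074), so it underflows to exactly 0.0:
print(n * 0.5 ** (n - 1))  # 0.0 due to underflow

# Hence Newton on x**n - 1 stops immediately even from x0 = 0.5,
# while x**2 - 1 converges normally from a nonzero start:
newton(lambda x: x**n - 1, lambda x: n * x ** (n - 1), 0.5)
print(newton(lambda x: x * x - 1, lambda x: 2 * x, 2.0))
```

So the stop at x0 = 0 is mathematically correct (the derivative really is zero there), but for very large exponents underflow can trigger the same branch at nonzero points as well.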
We guess this 'mistake' has something to do with the floating-point representation of zero in Python…
Should this be our direction?