Finding numerical approximations to $\pi$ has fascinated people for millennia. One famous formula is
$$\frac{\pi^2}{6} = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots = \sum_{k=1}^{\infty} \frac{1}{k^2}.$$
Say $s_k$ is the sum of the first $k$ terms of the series above, and $p_k = \sqrt{6 s_k}$. Here is a compact way to compute these sequences.
k = (1:100)';
s = cumsum( 1./k.^2 ); % cumulative summation
p = sqrt(6*s);
plot(k,p,'.-')
xlabel('k'), ylabel('p_k')
title('Sequence converging to \pi')
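As a quick numerical check of how slowly this sequence approaches $\pi$, one might display the last computed term and its error; a minimal sketch using the variables defined above:
fprintf('p_100 = %.8f\n', p(end))           % final term of the computed sequence
fprintf('error = %.2e\n', abs(pi-p(end)))   % the error shows slow convergence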
This graph suggests that $p_k \to \pi$, but it doesn't give much information about the rate of convergence. Let $\epsilon_k = |\pi - p_k|$ be the sequence of errors. By plotting the error sequence on a log-log scale, we can see a nearly linear relationship.
ep = abs(pi-p); % error sequence
loglog(k,ep,'.'), title('log-log convergence')
xlabel('k'), ylabel('error'), axis tight
This suggests a power-law relationship of the form $\epsilon_k \approx a k^b$, or equivalently $\log \epsilon_k \approx b \log k + \log a$. The constants $b$ and $\log a$ can be found by a linear least-squares fit.
V = [ k.^0, log(k) ]; % fitting matrix
c = V \ log(ep) % coefficients of linear fit
In terms of the parameters $a$ and $b$ used above, we have
a = exp(c(1)), b = c(2)
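As a sanity check (not part of the original computation), the same straight-line fit can be done with polyfit; the names cfit, b_check, and a_check below are just illustrative.
cfit = polyfit(log(k), log(ep), 1);   % slope and intercept of the log-log fit
b_check = cfit(1)                     % should match b
a_check = exp(cfit(2))                % should match a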
It's tempting to conjecture that the power law $\epsilon_k \approx a k^b$ continues to hold asymptotically as $k \to \infty$. Here is how the numerical fit compares to the original convergence curve.
hold on, loglog(k,a*k.^b,'r'), title('power-law fit')
axis tight
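To probe whether the fitted power law persists beyond $k = 100$, one could repeat the computation with many more terms and compare the observed error to the extrapolated fit; here is a minimal sketch (the variables kk, pp, ee are hypothetical names, not from the original).
kk = (1:10000)';                 % many more terms of the series
pp = sqrt(6*cumsum(1./kk.^2));   % recompute the approximating sequence
ee = abs(pi - pp);               % and its errors
ee(end) / (a*kk(end)^b)          % ratio of observed error to the extrapolated fit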