Now, however, we encounter an apparent conundrum: the main point of the curve fit is to determine a best possible value for \(V_0\) (among other parameters), but calculating \(\delta V_{pd}\left(\theta\right)\) requires a numerical value for \(V_0\). How do we determine the numerical values for \(\delta V_{pd}\left(\theta\right)\) needed to find a best estimate of \(V_0\) if we don't already know \(V_0\)? This is not the impasse that it might seem, because the curve-fitting algorithm always requires that we supply an initial guess for the parameters. In this situation, however, we carry out the curve-fitting algorithm repeatedly instead of just once, using the best-fit values output by the algorithm as our new initial values. Once the output values match the input values (within uncertainty), we stop. When the output values match the input values, we say the results are self-consistent.
Here is the procedure for finding self-consistent 'best fit' values for \(V_0\), \(V_1\), and \(\theta_0\) from the curve-fitting routine:
  1. Make a rough initial guess for the parameters \(V_0\), \(V_1\), and \(\theta_0\) from a graph of the data.
  2. Run the curve-fitting routine, then use the values of \(V_0\), \(V_1\), and \(\theta_0\) it outputs as a new 'initial guess'.
  3. Repeat the curve fit (using each output as a new input) until \(V_0\) stops changing.
Note: if you know how to program in Python, this would be a great place to introduce a loop into the code. Because we want this guide to be useful even to beginners, we haven't done that here (see Fig. \ref{627293} for the attached code), but we do plan to add a section on how to do that in the future; a rough sketch of such a loop is included below for readers who want to try it.
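
For readers who want to see what such a loop might look like, here is a minimal sketch. It assumes, purely for illustration, a model of the form \(V_{pd}\left(\theta\right)=V_0\cos^2\left(\theta-\theta_0\right)+V_1\) and a standard error-propagation formula for \(\delta V_{pd}\left(\theta\right)\); the arrays theta_data and V_data and the uncertainties dV and dtheta are hypothetical stand-ins for your own measurements, so you should substitute the model and uncertainty expression from your experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(theta, V0, V1, theta0):
    # assumed fitting function; substitute the model used in your experiment
    return V0 * np.cos(theta - theta0) ** 2 + V1

def delta_V_pd(theta, V0, theta0, dV, dtheta):
    # propagate the fixed experimental uncertainties dV and dtheta into an
    # effective uncertainty in V_pd; this depends on the current parameter
    # estimates, which is why the fit must be iterated
    dV_dtheta = -V0 * np.sin(2.0 * (theta - theta0))
    return np.sqrt(dV**2 + (dV_dtheta * dtheta)**2)

# --- hypothetical data, standing in for your measurements ---
rng = np.random.default_rng(0)
theta_data = np.linspace(0.0, np.pi, 19)
V_data = 2.0 * np.cos(theta_data - 0.1)**2 + 0.05 + rng.normal(0.0, 0.02, theta_data.size)
dV, dtheta = 0.02, 0.01             # measured uncertainties, held fixed throughout

params = np.array([1.5, 0.0, 0.0])  # step 1: rough initial guess from the graph
for _ in range(50):                 # cap the number of iterations, just in case
    sigma = delta_V_pd(theta_data, params[0], params[2], dV, dtheta)
    new_params, cov = curve_fit(model, theta_data, V_data,
                                p0=params, sigma=sigma, absolute_sigma=True)
    converged = np.isclose(new_params[0], params[0], rtol=1e-6)  # has V0 stopped changing?
    params = new_params             # step 2: output becomes the new initial guess
    if converged:                   # step 3: stop once the results are self-consistent
        break

print("self-consistent best-fit V0, V1, theta0:", params)
```

The loop simply re-runs the weighted fit, recomputing \(\delta V_{pd}\left(\theta\right)\) from the latest parameter estimates each time, until the output value of \(V_0\) matches the input value.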

Fitting the model to data

Calculating best fit values

We now turn to the actual Python code for non-linear curve fitting. Notice that  this is a  "weighted" fit, in that the stated uncertainty of each data point is taken into account during the fit. Practically speaking, this means the curve-fitting routine tries harder to match the model to the data at points with a smaller uncertainty (although it may not succeed) because those points are given greater importance ('weight').
Here we assume that values have already been experimentally determined for the uncertainties in \(V_0\), \(V_1\), and \(\theta\). We will leave these unchanged throughout the curve-fitting process.
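
As an illustration of what a weighted call to the fitting routine can look like, here is a short sketch using scipy.optimize.curve_fit, continuing with the assumed model and hypothetical data arrays from the sketch above. The sigma argument supplies the per-point uncertainties, and absolute_sigma=True tells the routine to treat them as absolute uncertainties rather than relative weights when computing the parameter covariance.

```python
# A single weighted fit, reusing the assumed model() and the hypothetical
# theta_data, V_data, and sigma arrays from the sketch above.
best, cov = curve_fit(model, theta_data, V_data,
                      p0=[1.5, 0.0, 0.0],   # initial guess for V0, V1, theta0
                      sigma=sigma,          # per-point uncertainties act as weights
                      absolute_sigma=True)  # treat sigma as absolute uncertainties

V0, V1, theta0 = best
dV0, dV1, dtheta0 = np.sqrt(np.diag(cov))   # one-standard-deviation parameter uncertainties
print(f"V0 = {V0:.3f} +/- {dV0:.3f}")
```

The diagonal of the returned covariance matrix gives the variances of the fitted parameters, so its square roots are the one-standard-deviation uncertainties in \(V_0\), \(V_1\), and \(\theta_0\).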