When I was little, my mom often made the mistake of asking me to get some ingredient for her out of the fridge. She assumed, reasonably, that delegating a simple task like that would speed up her efforts to get dinner on the table. But I had a simple approach. First, regardless of the needed ingredient (leftover chicken, chopped onion, salad dressing, etc.), I would look at the top shelf, where the gallon of milk obscured all the other items. I would then scan the lower shelves. I felt strongly that looking for an ingredient was an activity for the eyes, not the hands, so there was no opening drawers or shifting things around on the shelves. The refrigerator door was also off limits. That’s where the mayonnaise was, and if Mom wanted something near the mayonnaise jar, she wouldn’t have asked me to find it.

If I was lucky enough to get a front-row request, I could hand the ingredient to Mom and feel proud of my contribution to dinner. The more likely scenario was getting caught in an infinite loop of scanning the shelves top to bottom, not finding the ingredient, forgetting what the ingredient was, questioning whether the ingredient was really essential to the meal, and thinking how much I would prefer a different meal. Eventually, my mom would come over to the fridge, shift the watermelon a bit, and reach in to get what she was asking for. And I felt as though I had really helped, having eliminated the front-of-the-fridge items and all.

The main difference in our searching algorithms, besides general competence, was that my mom had some *a priori* knowledge of the contents of the fridge. She was the one who loaded the groceries, put leftovers in Tupperware, and organized the fridge. If my mom had to cook from a friend’s fridge, I’m sure she would get caught for a moment in the fridge stare before asking where something was kept. The key insight of this delightful metaphor is that you can save a lot of time if you know the characteristics of your system.

This week, our team was working on a problem where we needed to find roots of some fairly complex single-variable equations. Initially, we looked to the Ruby Standard Library, which has a method, `#nlsolve`, designed to find roots. This solver ran a bit slow and got hung up on some initial guesses (it did the fridge stare). Looking at the implementation, we saw that this method was generalized to solve multi-dimensional systems, which entailed finding the Jacobian matrix of the system and handing it off to an LU solver. This was overkill for our needs. We knew quite a bit about our equations: we knew the general shape of each curve, and we knew each had a single root. There were also some constraints on the curve that allowed us to make a decent initial guess. Solving this problem wouldn’t be too complicated, so we decided to write our own root solver for a single equation, streamlined to our needs. Here’s the secret sauce of how we did it:

**Start with Bisection as a Base**

The Bisection method is like a binary search for your root. You start with a high and a low guess that evaluate to values with opposite signs, meaning the curve crosses the x-axis somewhere in between. The method uses the average of the high and low guesses as the new guess. If the function’s value at the new guess is negative, it becomes the new low; if it is positive, it becomes the new high. The method continues cutting its search scope in half until you get close to a root or it determines there is no root in the scope. This method converges reliably, even if you don’t have a great initial guess. It looks a little something like this:

```ruby
def solve(f = @f, low = @low, high = @high, tol = @tol, n = @n)
  x = (high + low) / 2
  y = f.call(x)
  if y.abs < tol
    # successfully found root
    x
  elsif n <= 0
    # break out of solver after so many iterations
    x
  else
    # narrow window of searching by half
    if y > 0
      high = x
    else
      low = x
    end
    solve(f, low, high, tol, n - 1)
  end
end
```
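If you want to play with the idea outside the gem, here is a self-contained sketch of the same bisection loop. The `bisect` name, the iterative (rather than recursive) structure, and the sign-comparison against `f.call(high)` are our additions for illustration, not the gem's API:

```ruby
# Standalone bisection sketch: find a root of f between low and high,
# assuming f(low) and f(high) have opposite signs.
def bisect(f, low, high, tol = 1e-6, max_iters = 100)
  max_iters.times do
    x = (low + high) / 2.0
    y = f.call(x)
    return x if y.abs < tol
    # Keep the half-interval where the sign change (and so the root) lives.
    if (y > 0) == (f.call(high) > 0)
      high = x
    else
      low = x
    end
  end
  (low + high) / 2.0
end

f = ->(x) { x**2 - 10 }
puts bisect(f, 0, 5) # converges toward sqrt(10), roughly 3.1623
```

Comparing the sign of the midpoint against `f.call(high)` lets this sketch work whichever end of the bracket is negative, at the cost of one extra function evaluation per step.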

For our problem, we use Bisection as a way to find a better first guess to pass off to Newton.

**Add a little Newton**

Newton’s method is an iterative approach that starts with an initial guess for a root, `x0`, and refines the guess by traveling along the curve based on its slope, where the new guess is defined as `x = x0 - f(x0) / f'(x0)`. This method is good for finding a root when you have a good initial guess. However, it can become unstable if it gets close to a horizontal slope (which is like dividing by zero in the equation above). There’s also the rare possibility that you could get stuck in an endless loop, where one guess cycles you back to a previous guess. Here’s a look at the core of our Newton’s method:

```ruby
def solve(f = @f, x0 = @x0, tol = @tol, n = @n, eps = @eps)
  y0 = f.call(x0)
  y_prime = (f.call(x0 + eps) - y0) / eps
  raise NonconvergenceError.new if y_prime.abs < tol
  x = x0 - y0 / y_prime
  y = f.call(x)
  if y.abs < tol
    # successfully found root
    x
  elsif n <= 0
    # not within threshold after n iterations
    x
  else
    solve(f, x, tol, n - 1, eps)
  end
end
```
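As with bisection, a self-contained version is easy to experiment with. The sketch below uses the same finite-difference derivative idea; the `newton` name, the iterative loop, and the hard-coded flat-slope cutoff are our illustrative choices, not the gem's API:

```ruby
# Standalone Newton sketch with a finite-difference approximation of f'.
# eps controls the step used to estimate the derivative.
def newton(f, x0, tol = 1e-6, max_iters = 50, eps = 1e-7)
  max_iters.times do
    y0 = f.call(x0)
    return x0 if y0.abs < tol
    y_prime = (f.call(x0 + eps) - y0) / eps
    # Bail out near a horizontal slope rather than dividing by ~zero.
    raise "derivative too flat to continue" if y_prime.abs < 1e-12
    x0 -= y0 / y_prime
  end
  x0
end

f = ->(x) { x**2 - 10 }
puts newton(f, 3.0) # converges toward sqrt(10), roughly 3.1623
```

Starting from a decent guess like `3.0`, this takes only a handful of iterations, which is exactly why handing Bisection's rough answer to Newton pays off.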

**Adjust the Recipe to Your Liking**

We packaged up our root solving methods in the aptly named gem, `root_solver`. The gem offers three methods for finding roots: Bisection, Newton, and a combination of the two that starts with Bisection and then hands off to Newton. Each requires a bit of configuration, which you can customize for a specific equation. The best thing you can do for this root solver is to choose your initial guess well. In addition, you can adjust the tolerance for root equality and the maximum number of iterations to run. The solver will return a value after the configured number of iterations, so the client may need to verify the solution if the number of iterations is low.

**Serve It Up**

All three root solvers in the gem respond to `#solve`, so a client doesn’t have to know which root solving method it is using. The root solver object expects a callable (proc, lambda, or PORO responding to `call`). The callable should take one argument, our `x`, and evaluate our function at `x`. Example callable:

```ruby
f = Proc.new { |x| x ** 2 - 10 } # Function for which we wish to find roots
l = 0                            # An educated guess for the lower limit
h = 5                            # An educated guess for the upper limit
t = 1e-3                         # How accurate do we want to be
RootSolver::BisectionNewton.new(f, l, h, t).solve
# => 3.1622776375625614
```
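Because the solver returns a value even when it stops on the iteration cap, it’s worth plugging the result back into the function to confirm it really is a root. A minimal check, continuing the example above (the `root` value is the result from that call):

```ruby
f = Proc.new { |x| x ** 2 - 10 }
t = 1e-3
root = 3.1622776375625614 # value returned by the solver above
# A root within tolerance should evaluate to (nearly) zero.
raise "not a root within tolerance" unless f.call(root).abs < t
```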

**Share and Enjoy**

We were able to speed up our calculations by an order of magnitude, just by simplifying the root solver to the special, but most common, case of a single equation. The method in the Ruby Standard Library generalizes to a multi-dimensional system of equations, but it’s likely you just need a good, old-fashioned, single-dependent-variable Newton or Bisection solver like the one we provide in this gem. If that’s your appetite, please help yourself to our `root_solver`, and enjoy.