I am writing a program to generate the first *k* numbers of the form *a* + *b*·sqrt(2), where *a* and *b* are non-negative integers.

The algorithm I am using exploits the fact that every number in this series is obtained by adding either 1 or sqrt(2) to an earlier number in the series, with the series starting at 0. We collect the numbers in an array whose elements are expected to be evaluated lazily on demand. The first number in the series is 0, so we initialize the first element of the array with 0. We maintain two pointers *i* and *j*, both starting at index 0 (the first number in the series). The next number in the series is *min(A[i] + 1, A[j] + sqrt(2))*. If the next number comes from pointer *i*, then *i* is incremented (as we never want to add 1 to *A[i]* again); similarly, if it comes from pointer *j*, then *j* is incremented to the next index, i.e. *j + 1*. Both *i* and *j* are incremented when *A[i] + 1 = A[j] + sqrt(2)*, since the two candidates are the same number and we only want it once.
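For what it's worth, here is a sketch of the same two-pointer scheme written over a plain lazy list instead of an array, so each element can refer back to earlier elements without forcing the whole structure (`value`, `series`, and `firstK'` are just names I made up for this sketch):

```
import qualified Data.Array as A

-- A pair (a, b) stands for the number a + b * sqrt 2.
value :: (Int, Int) -> Double
value (a, b) = fromIntegral a + fromIntegral b * sqrt 2

series :: [(Int, Int)]
series = (0, 0) : go series series
  where
    -- The two list tails play the role of the pointers i and j.
    go is@((ia, ib) : is') js@((ja, jb) : js')
      | iv < jv   = (ia + 1, ib) : go is' js
      | iv > jv   = (ja, jb + 1) : go is js'
      | otherwise = (ia + 1, ib) : go is' js'  -- equal candidates: advance both
      where
        iv = value (ia + 1, ib)   -- candidate from pointer i: A[i] + 1
        jv = value (ja, jb + 1)   -- candidate from pointer j: A[j] + sqrt 2

firstK' :: Int -> [(Int, Int)]
firstK' k = take k series
```

This version terminates (e.g. `firstK' 6` gives `[(0,0),(1,0),(0,1),(2,0),(1,1),(0,2)]`), which is part of why I suspect the array construction rather than the algorithm itself.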

I think the snippet below captures the above algorithm. However, when I run the code in ghci, the function `f` is called repeatedly with arguments 1 0 0 and the program never finishes. I don't understand how this call keeps being made, although it seems to be triggered by `v A.! i`.

```
import qualified Data.Array as A
import Debug.Trace

-- (x, y) represents the number x + y * sqrt 2
compute :: Int -> Int -> Double
compute x y = fromIntegral x + fromIntegral y * sqrt 2

firstK :: Int -> [(Int, Int)]
firstK k = A.elems v
  where
    v = A.listArray (0, k - 1) $ (0, 0) : f 1 0 0
    f c i j
      | c == k = []
      | otherwise = traceShow (c, i, j) $
          let (ix, iy) = v A.! i
              (jx, jy) = v A.! j
              iv = compute (ix + 1) iy   -- candidate from pointer i
              jv = compute jx (jy + 1)   -- candidate from pointer j
          in if iv < jv
               then (ix + 1, iy) : f (c + 1) (i + 1) j
               else if iv > jv
                 then (jx, jy + 1) : f (c + 1) i (j + 1)
                 else (ix + 1, iy) : f (c + 1) (i + 1) (j + 1)
```

Output:

```
λ> firstK 50
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
(1,0,0)
...
...
```
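If it helps, here is a stripped-down pair of arrays that, as far as I understand `listArray`, isolates what I suspect is going on: `listArray` has to walk the whole spine of its input list to build the array, so a list whose spine depends on the array's own elements can never finish being built, while one whose spine is known up front is fine. The names `ok` and `bad` are just for illustration:

```
import qualified Data.Array as A

-- Fine: the spine (a two-element list) is known without touching the
-- array, so the self-reference only lives inside an element thunk.
ok :: A.Array Int Int
ok = A.listArray (0, 1) (0 : [ok A.! 0 + 1])

-- Suspect shape (analogous to f choosing its next cons via v A.! i):
-- the spine itself inspects the array, so building the array needs the
-- spine, which needs an element, which needs the array...
bad :: A.Array Int Int
bad = A.listArray (0, 1) xs
  where
    xs = 0 : (if bad A.! 0 == 0 then [1] else [2])

main :: IO ()
main = print (A.elems ok)  -- prints [0,1]; A.elems bad should never finish
```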