Also, a shortened and shrunk (to fit under Imgur's 2MB limit) GIF, which should be more epilepsy-friendly.

This post describes the maths that went into moving the pictures around so that they make a smooth-ish animation. First though, a note on file formats, for anyone Googling. Since I was fitting curves in 3-space, I used the *_GEOMED.IMG files, which are corrected for geometric distortion. Unlike the RAW and CLEANED images (see my earlier post), the GEOMED files are 1000x1000 pixels and 4096-colour greyscale, come with a 2000-byte header, and have some corrupted bytes. Octave code:

```octave
odd_nums = 1:2:1999;
even_nums = 2:2:2000;
in_file = "C4389359_GEOMED.IMG";
out_file = "C4389359_GEOMED.png";
fid_in = fopen(in_file, "r");
[hdr, ct1] = fread(fid_in, 2000);         % skip the 2000-byte header
[img_array, ct2] = fread(fid_in, [2000 Inf]);
img_array = img_array';
img_array_evens = img_array(:, even_nums);
% Zero the corrupted bytes:
img_array_evens(img_array_evens > 15) = 0;
img_array_odds = img_array(:, odd_nums);
img_array = 256*img_array_evens + img_array_odds;
img_array = double(img_array)/4095;
imwrite(img_array, out_file);
fclose(fid_in);
```

**Theory**

The Voyager image files tell us that the field of view in both x- and y-directions for the geometrically-corrected images is 0.4493 degrees. Since all of the GEOMED images are 1000x1000 pixels, this means that there is a simple relation between pixels and angles, and below I'll usually treat this as obvious and gloss over it.
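Concretely, the relation is just a fixed number of radians per pixel. A minimal sketch in Python (the function names are mine; the post's own code is in Octave):

```python
import math

# Field of view from the image labels: 0.4493 degrees across 1000 pixels.
FOV_DEG = 0.4493
NPIX = 1000
RAD_PER_PIXEL = math.radians(FOV_DEG) / NPIX

def pixel_to_angle(px):
    """Angular offset (radians) of a pixel row/column from the image centre."""
    return (px - NPIX / 2) * RAD_PER_PIXEL

def angle_to_pixel(angle):
    """Inverse: radians from centre back to a (fractional) pixel index."""
    return angle / RAD_PER_PIXEL + NPIX / 2
```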

Note that in many steps below, angles are calculated with inverse trig functions. It's easy to lose a sign when taking an arccos, or end up on the wrong side of τ/4 when taking an arcsin. In my code I just fudged signs until it worked – the field of view is small enough not to worry any more than that.

**First steps**

To begin, suppose that our camera is sitting at the origin, pointing in the direction of the positive x-axis, that is, pointing parallel to the vector **i** = (1, 0, 0). It is oriented in the natural way for the Cartesian coordinates, so that "up" is **k** = (0, 0, 1). There is an object at position **r**. Where does the object appear on the camera screen?

We can work this out by placing an imaginary unit sphere around the camera, and describing each point on the sphere with latitude and longitude coordinates (α, β). The centre of the picture taken by the camera will be at (α, β) = (0, 0). Any point on the sphere can be written as (cos(β)cos(α), -sin(β)cos(α), sin(α)).

The position of the object on the camera screen is then given by the latitude and longitude of the point where the vector **r** intersects the imaginary unit sphere. This point is simply **r**/|**r**|. From the above expression for any point on the sphere, we see that **r . k**/|**r**| = sin(α), so that the latitude α = arcsin(**r . k**/|**r**|).

Similarly, **r . i**/|**r**| = cos(β)cos(α), so the longitude β = arccos[**r . i**/(|**r**|cos(α))].
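Here is a quick numerical check of these formulas, in Python rather than the post's Octave; the atan2 variant at the end is one way to dodge the sign headaches mentioned earlier:

```python
import math

def latlong(r):
    """Latitude and longitude (radians) at which the direction r meets the
    unit sphere, for a camera at the origin looking along +x with +z up."""
    x, y, z = r
    norm = math.sqrt(x*x + y*y + z*z)
    alpha = math.asin(z / norm)                 # latitude = arcsin(r.k/|r|)
    # Clamp before arccos, in case rounding pushes the argument past 1:
    c = max(-1.0, min(1.0, x / (norm * math.cos(alpha))))
    beta = math.acos(c)                         # longitude, sign ambiguous
    # atan2 recovers the sign, since the sphere point is
    # (cos(b)cos(a), -sin(b)cos(a), sin(a)):
    beta_signed = math.atan2(-y, x)
    return alpha, beta, beta_signed
```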

These are our basic building blocks for determining where an object is displayed in a photo, but there are two immediate generalisations that we'll need. Firstly, the camera could be pointing in the direction **i'** instead of **i**, with "up" being **k'**. Secondly, the camera could be at some position **c**, so that instead of using **r** in the above equations, we would use **v** = **r** - **c**. So:

α' = arcsin(**v . k'**/|**v**|), and

β' = arccos[**v . i'**/(|**v**|cos(α'))].

Note that we could alternatively write β' = -arcsin[**v . j'**/(|**v**|cos(α'))].
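For concreteness, both generalisations together as a small Python function (a sketch with my own names, not the post's actual code):

```python
import math

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def project(r, c, i_hat, j_hat, k_hat):
    """Latitude/longitude of an object at r, as seen by a camera at c whose
    orthonormal pointing basis is (i_hat, j_hat, k_hat)."""
    v = [r[n] - c[n] for n in range(3)]
    norm = math.sqrt(dot(v, v))
    alpha = math.asin(dot(v, k_hat) / norm)
    # arcsin form of the longitude, which keeps the sign:
    beta = -math.asin(dot(v, j_hat) / (norm * math.cos(alpha)))
    return alpha, beta
```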

**Transformations on latitude and longitude coordinates**

In this section, we start with some point **v** = (cos(β)cos(α), -sin(β)cos(α), sin(α)), and see what happens to α' and β' when we transform to **i'**, **j'**, **k'**.

*Rotations about* **i** *by an angle ξ*

In this example, the centre of the picture won't move, and points will rotate around the centre.

The rotated axes are described by

**i'** = **i**,

**j'** = **j**cos(ξ) + **k**sin(ξ),

**k'** = -**j**sin(ξ) + **k**cos(ξ).

The new latitude is given by

α' = arcsin(**v . k'**) = arcsin[sin(β)cos(α)sin(ξ) + sin(α)cos(ξ)].

And once we have α', we can calculate the new longitude

β' = arccos[**v . i'**/cos(α')] = arccos[cos(β)cos(α)/cos(α')].
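The same formulas in executable form (Python sketch, my own names); with ξ = 0 they should reduce to the identity, and a roll about the pointing axis should leave each point's angular distance from the picture centre unchanged:

```python
import math

def rotate_about_i(alpha, beta, xi):
    """New (latitude, longitude) after rolling the camera about its
    pointing axis i by an angle xi, using the formulas above."""
    a2 = math.asin(math.sin(beta)*math.cos(alpha)*math.sin(xi)
                   + math.sin(alpha)*math.cos(xi))
    b2 = math.acos(math.cos(beta)*math.cos(alpha) / math.cos(a2))
    return a2, b2
```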

*Moving the centre of the picture to latitude and longitude (γ, δ)*

This transformation might be called a translation, but because we're working on a sphere, it takes the form of two rotations. (There are infinitely many ways to represent this transformation, but I think the one I've used is the easiest.)

The first rotation is spinning the equatorial plane of the sphere about the **k**-axis, so that the longitude lines match up to where we want them. That is, β' = β - δ. In terms of the axes, this is:

**i'** = **i**cos(δ) - **j**sin(δ),

**j'** = **i**sin(δ) + **j**cos(δ),

**k'** = **k**.

Since we're only spinning the equatorial plane, the latitude doesn't change, so α' = α.

The second rotation is about **j'**, tilting the equatorial plane so that latitude γ is in the centre of the picture. This requires

**i''** = **i'**cos(γ) + **k'**sin(γ),

**j''** = **j'**,

**k''** = -**i'**sin(γ) + **k'**cos(γ).

So

α'' = arcsin[-cos(β')cos(α')sin(γ) + sin(α')cos(γ)],

and hence

β'' = -arcsin[-sin(β')cos(α')/cos(α'')] = arcsin[sin(β')cos(α')/cos(α'')].
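Chaining the two rotations in code looks like this (a Python sketch with hypothetical names). Re-centring the very point being moved to the centre should land it at (0, 0):

```python
import math

def recentre(alpha, beta, gamma, delta):
    """Move the picture centre to latitude/longitude (gamma, delta):
    first spin about k (beta -> beta - delta), then tilt about j'."""
    b1 = beta - delta
    a1 = alpha
    a2 = math.asin(-math.cos(b1)*math.cos(a1)*math.sin(gamma)
                   + math.sin(a1)*math.cos(gamma))
    b2 = math.asin(math.sin(b1)*math.cos(a1) / math.cos(a2))
    return a2, b2
```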

**Application to Saturn's rings**

Voyager 2 took a sequence of 159 photos of spokes in Saturn's rings specifically for the purpose of making a video. Since the camera wasn't pointing at the same spot in each photo, the images need to be re-centred somehow before they make a smooth animation. The goal of this section is to describe how to fit curves to the rings as they appear in the Voyager photos. Once we are able to do this, we'll be able to go from a point in 3-space on the ring plane to a pixel in the photo. And by reversing the procedure, we'll be able to take a pixel and calculate where in the ring plane it is. And once we have *that*, we can transform the ring plane coordinates to a new image, with the latter transformations smoothly varying (or even constant) over the animation.

I used HORIZONS to get the (unit) normal vector **n** to Saturn's equatorial plane (i.e., the ring plane), using one of the moons whose orbit has a very low (~0.001 degree) inclination to the plane. Take any three moon positions **p**_1, **p**_2, **p**_3, calculate the cross product of (**p**_1 - **p**_2) with (**p**_1 - **p**_3), and normalise.
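A sketch of that cross-product recipe (hypothetical function name, and stand-in coplanar points rather than real HORIZONS output):

```python
def ring_plane_normal(p1, p2, p3):
    """Unit normal to the plane through three positions p1, p2, p3."""
    u = [p1[i] - p2[i] for i in range(3)]
    w = [p1[i] - p3[i] for i in range(3)]
    n = [u[1]*w[2] - u[2]*w[1],     # cross product u x w
         u[2]*w[0] - u[0]*w[2],
         u[0]*w[1] - u[1]*w[0]]
    norm = sum(x*x for x in n) ** 0.5
    return [x / norm for x in n]
```

(The sign of **n** depends on the order of the three points; for the ring plane either sign works.)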

The first task is to find the coordinates of a circle in the ring plane. We start with the parametrised equations in the xy-plane, (r*cos(t), r*sin(t), 0). Then we rotate (tilt) the plane. Since we want the plane to end up normal to **n**, we'll rotate about an axis normal to (n_x, n_y, 0). Defining θ = arctan(n_y/n_x) - τ/4, this axis is the line y = x*tan(θ) at z = 0 (in the following I'll omit mention of the "z = 0").

It is useful to define rotated unit vectors **i'** = **i**cos(θ) + **j**sin(θ) and **j'** = -**i**sin(θ) + **j**cos(θ). Note that **i'** is pointing along what will be the axis of rotation, and that **j'** is orthogonal. The perpendicular distance h from a point (x_0, y_0, 0) to the line y = x*tan(θ) is equal to (x_0, y_0, 0) **. j'** = -x_0*sin(θ) + y_0*cos(θ).

The perpendicular from (x_0, y_0, 0) meets the line y = x*tan(θ) at some point (x_1, y_1, 0). The distance d from (x_1, y_1, 0) to the origin is (x_0, y_0, 0) **. i'** = x_0*cos(θ) + y_0*sin(θ). We also have that x_1 = d*cos(θ) and that y_1 = d*sin(θ).

Since (x_1, y_1, 0) = d**i'**, we can write the position (x_0, y_0, 0) as d**i'** + h**j'**, which is now in a form ready for rotation. We rotate by an angle φ = arccos(n_z) about an axis parallel to **i'**, so that the transformed unit vector **j''** is given by **j'**cos(φ) - **k**sin(φ).

So the point (x_0, y_0, 0) is transformed to d**i'** + h**j''**, or

(x_0, y_0, 0) → (d*cos(θ) - h*sin(θ)cos(φ), d*sin(θ) + h*cos(θ)cos(φ), -h*sin(φ)).

Of course this expression could be expanded further by substituting back the full forms of d and h, but it is a mess. In any case, we can substitute r*cos(t) for x_0 and r*sin(t) for y_0, and we have the parametric equations for a circle in the ring plane.
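The whole construction can be checked numerically: every point produced should lie in the plane (dot product with **n** equal to zero) and sit at distance r from the origin. A Python sketch, using atan2 instead of a bare arctan to get the quadrant right:

```python
import math

def ring_circle_point(r, t, n):
    """Point at parameter t on a circle of radius r lying in the plane
    through the origin with unit normal n, via the construction above."""
    theta = math.atan2(n[1], n[0]) - math.pi/2   # tau/4
    phi = math.acos(n[2])
    x0, y0 = r*math.cos(t), r*math.sin(t)
    d = x0*math.cos(theta) + y0*math.sin(theta)   # component along the axis
    h = -x0*math.sin(theta) + y0*math.cos(theta)  # perpendicular component
    return (d*math.cos(theta) - h*math.sin(theta)*math.cos(phi),
            d*math.sin(theta) + h*math.cos(theta)*math.cos(phi),
            -h*math.sin(phi))
```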

Now that we can make a circle in the ring plane, we can in particular calculate the coordinates of the Huygens gap, which appears in all of the photos in the sequence, and whose radius is 1.1758e5 km. The next step is to work out where in each photo the Huygens gap is.

To do this, we'll first need some auxiliary information. Each Voyager photo is timestamped, and we can use HORIZONS to get Voyager's position **p** relative to Saturn at the given time. The procedure to get the position in pixels of each point in the Huygens gap is then:

1. Rotate **i** and **j** by an angle ζ about **k** so that **i'** points along (-p_x, -p_y, 0). For consistency of notation, let **k'** = **k**.

2. Rotate **i'** and **k'** by an angle η about **j'** so that **i'** now points from Voyager to the centre of Saturn.

3. Using these unit vectors, calculate the latitude and longitude α and β of each point on the Huygens gap as seen from Voyager (i.e., subtract the position of Voyager from the position of the point in the Huygens gap, take dot products with the primed unit vectors, then arcsin etc.).

4. Rotate these pixels by some angle to be fitted.

5. "Translate" the results by some angular distance to be fitted.

The fitting requires searching through a three-dimensional parameter space (rotation angle, translation in longitude, translation in latitude). It's not hard to teach the computer where the Huygens gap is in the picture: start roughly in the middle of the picture, go up until you hit some dark pixels, follow the dark pixels left and right. Then once we know the location of the Huygens gap in the picture, loop through the parameter search space, calculate the pixel-location of the Huygens gap with the current parameters, see how far away from the true Huygens gap you are, output the best parameters.
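The shape of that search can be sketched as a brute-force loop over a parameter grid. This is only an illustration: the `transform` placeholder below stands in for the full rotation-and-recentring chain described above, and the error function simply scores each model point against its nearest traced gap pixel.

```python
import math

def fit_parameters(gap_model, gap_pixels, rot_range, lat_range, lon_range):
    """Brute-force search over (rotation, latitude shift, longitude shift),
    minimising mean squared distance between transformed model points and
    the gap pixels traced from the photo."""
    def transform(pt, rot, dlat, dlon):
        # Placeholder: rotate in the image plane, then shift.
        x, y = pt
        xr = x*math.cos(rot) - y*math.sin(rot)
        yr = x*math.sin(rot) + y*math.cos(rot)
        return (xr + dlon, yr + dlat)

    def error(rot, dlat, dlon):
        return sum(min((tx - gx)**2 + (ty - gy)**2 for gx, gy in gap_pixels)
                   for tx, ty in (transform(p, rot, dlat, dlon)
                                  for p in gap_model)) / len(gap_model)

    best = min((error(r, la, lo), r, la, lo)
               for r in rot_range for la in lat_range for lo in lon_range)
    return best[1:]   # parameters with the smallest error
```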

Confession: I've actually done the fitting much more carefully for this post than for the video, and I now think that some of the bumpiness of the video is due to my earlier carelessness. Perhaps one day I'll re-do all the calculations.

This is what the fitted Huygens gap looks like:

It's not too bad – perhaps it could be improved with a finer search of the parameter space, but it's clearly quite close. As a check, we can keep the same fitted parameters, but also plot the calculated location of the Encke and Maxwell gaps:

Pretty good! (With the parameters used for the video, these latter fits are mediocre albeit roughly in the right places.)

Loop over all the photos, get all the fitted parameters (in practice I only fitted the rotation for the first and last photos, linearly interpolating for the rest), and get ready to back-transform.

The back-transformation takes each pixel (and hence a latitude and longitude) to a point on the ring plane. First you undo the five transformations listed in the earlier procedure, so that you can take the resulting angles and construct a vector **v** of the form (cos(β)cos(α), -sin(β)cos(α), sin(α)).

We are given Voyager's position **p**, so we need to find the value of s for which **p** + s**v** intersects the ring plane, which has normal vector **n**. This problem has a nifty solution which I learned in year 12 but had since forgotten: since any point in the ring plane dotted with **n** is equal to zero (the plane goes through the origin), we require (**p** + s**v**) **. n** = 0, and so s = -**p . n**/**v . n**.
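That intersection is a few lines of code (Python sketch, hypothetical name):

```python
def _dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def ring_plane_point(p, v, n):
    """Intersection of the line p + s*v with the plane through the origin
    normal to n: requiring (p + s*v).n = 0 gives s = -p.n / v.n."""
    s = -_dot(p, n) / _dot(v, n)
    return [p[i] + s*v[i] for i in range(3)]
```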

From the value of s we get the point in 3-space in the ring plane, and then we run the five-step procedure forwards again, but this time using smoothly varying parameters, outputting a new image. And we're done!

One final aside: you may notice some small grey splotches with a diameter of about 10 pixels in the earlier pictures – they're especially noticeable near the lower-right corner. These are where the reseau markings are (Voyager's vidicons introduce some distortion depending on brightness and exposure time, so the fixed reseau tells you where everything should be). In the raw images, these are dark pixels, so NASA's processing algorithm fills them in with the average value of the surrounding pixels. This introduces obvious artefacts in the pictures of the rings, but we can fill them in more appropriately in this case by calculating the ringlet each smudged pixel is on, and linearly interpolating pixel values along that ringlet.