I joined the University of Colorado at Boulder (CU Boulder) after 4.5 years in the space and software industry. I'd learned the Go programming language about a year before I joined CU, while I was working at Sparrho. I used it mainly to write web backend software, specifically a tool called gofetch: it used AWS S3 as an index and storage mechanism to download twenty-two thousand pieces of content daily in about six minutes, a huge improvement over what we were using at the time. As I was learning Go, a coworker recommended I check out Rust, which I did, but not enough to actually start learning it.
Anyway, at CU, I had to write a fair amount of introductory astrodynamics software. I initially used Matlab with a free academic license, and the professors would give us some sample code. I've known Python since 2005/2006 and wrote it professionally for years, so I switched my code to that. Astrodynamics requires a lot of linear algebra (matrix operations), and pure Python isn't well suited for this; instead, folks tend to use numpy. I quite dislike the syntax of numpy, so I was seeking an alternative. I knew Go, and I didn't have much time to code up something that worked. Hence, almost all of my CU work was done in my space mission design (smd) toolkit.
I learned a couple of things from this.
- Go is really not well suited for scientific computing: its vector and matrix math libraries only check dimensions at runtime, and when those panics don't trigger, you get lots of silent math errors;
- Go requires (required?) external libraries to compile the linear algebra code, which was a pain when I had to deploy code on Google Compute Engine for my thesis;
- No good plotting tools.
One of the last sentences of my Master's thesis defense was that I had learned that Go was not well suited for scientific work, and that I wanted to rewrite the toolkit in Rust.
I had decided I really wanted to learn Rust and sought a project that would help me do so. My usual approach is to port a problem I've already solved to the new language I want to learn.
I started working on Nyx over Christmas of 2017, while I was an aerospace and software engineer at Advanced Space in Boulder, Colorado. From the ground up, I developed propagators and orbital state computations, relying a good amount on NASA GMAT. A few commits here and there, and eventually it supported two-body spacecraft orbit determination and basic spacecraft attitude dynamics. These varied state sizes were possible in Nyx from the start thanks to Rust's monomorphization: generic functions effectively work like very basic C++ templates, where the dimension of the input vector can vary. This was possible only thanks to nalgebra, by far the most complete linear algebra library in Rust. It handles tons of super useful things out of the box, including compile-time dimension checking for matrix and vector computations!
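That compile-time dimension checking is the key feature. As a minimal sketch of the concept (this is not nalgebra's actual API, which is far richer), Rust's const generics let a vector's dimension live in the type system, so a mismatched operation fails to compile instead of panicking at runtime:

```rust
// Sketch of compile-time dimension checking via const generics.
// NOT nalgebra's API -- just an illustration of why mismatched
// dimensions become a compile error rather than a runtime panic.
#[derive(Debug, Clone, Copy)]
struct Vector<const N: usize> {
    data: [f64; N],
}

impl<const N: usize> Vector<N> {
    fn dot(&self, other: &Vector<N>) -> f64 {
        // Both arguments are guaranteed by the type system to have N components.
        self.data.iter().zip(other.data.iter()).map(|(a, b)| a * b).sum()
    }
}

fn main() {
    let a = Vector { data: [1.0, 2.0, 3.0] }; // inferred as Vector<3>
    let b = Vector { data: [4.0, 5.0, 6.0] }; // Vector<3>
    println!("{}", a.dot(&b)); // 32

    // let c = Vector { data: [1.0, 2.0] }; // Vector<2>
    // a.dot(&c); // would not compile: expected Vector<3>, found Vector<2>
}
```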
By all practical metrics, I became the engineering lead on the Cislunar Autonomous Positioning System (CAPS) when Advanced Space won its NASA SBIR Phase 2 funding. We had a license for JPL MONTE and tried for months to develop CAPS with it. It turns out that MONTE does not support "ground stations" that are attached to spacecraft. Since Nyx already supported variable state sizes, it was relatively easy to set up a new set of Dynamics (cf. the Nyx documentation) for propagating two spacecraft in sync and generating range and range-rate measurements between them.
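Those inter-spacecraft measurements boil down to simple vector geometry. Here is a minimal sketch (not Nyx's actual API): given the positions and velocities of the two spacecraft in a common frame, the range is the norm of the relative position, and the range-rate is the relative velocity projected onto the line-of-sight direction:

```rust
// Sketch of inter-spacecraft range and range-rate (NOT Nyx's API).
// Positions in km, velocities in km/s, both states in the same frame.

fn norm(v: [f64; 3]) -> f64 {
    (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt()
}

fn range_and_range_rate(
    r1: [f64; 3], v1: [f64; 3],
    r2: [f64; 3], v2: [f64; 3],
) -> (f64, f64) {
    let dr = [r2[0] - r1[0], r2[1] - r1[1], r2[2] - r1[2]];
    let dv = [v2[0] - v1[0], v2[1] - v1[1], v2[2] - v1[2]];
    let range = norm(dr);
    // Range-rate: d|dr|/dt = (dr . dv) / |dr|
    let range_rate = (dr[0] * dv[0] + dr[1] * dv[1] + dr[2] * dv[2]) / range;
    (range, range_rate)
}

fn main() {
    // Hypothetical states: second spacecraft 10 km away, closing at 1 km/s.
    let (rho, rho_dot) = range_and_range_rate(
        [0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
        [10.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
    );
    println!("range = {rho} km, range-rate = {rho_dot} km/s");
}
```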
Advanced Space agreed to let me see whether we could use Nyx for the CAPS simulations. That was the first time I was paid to work on Nyx! In fact, all other development was done during nights and weekends. That's when I added spherical harmonics, time-varying state noise compensation, and smoothing and iteration to the filter.
By the time I was about to leave Advanced Space for Masten Space Systems, I wanted to clarify that I owned the code for Nyx. The CEO and I agreed on that, and that's when I really started working even later nights on Nyx.
As of today, Nyx is nearing version 1.0. What new developments led to this version?
- I rewrote the internal logic almost entirely. The Dynamics, which define the equations of motion of a system, no longer store the state of that system: this allows several propagators to share a single set of dynamics, thereby enabling trivial Monte Carlo analyses without using any more memory than a single propagation (apart from the space for each state vector, which is 42 f64s for an orbit propagation). As an example, this means that one hundred spacecraft can be propagated one day forward in time, with a 70x70 spherical harmonics field around the Earth and the gravity pull of the Earth, the Moon, the Sun, and Jupiter, all in... 4.98 seconds! In GMAT, that kind of propagation for a single spacecraft takes at least twice as long (and, in my experience, up to 10 minutes for 150-day propagations).
- Interpolated trajectories (Lagrange interpolation), which are used for exact stop conditions (e.g. "propagate until the periapse passage in this frame," where the true anomaly will be within 1e-6 degrees of zero), plus conversion of the trajectories into whatever frame one wants (as long as it's a frame loaded in the XB).
- Significantly more precise time management, which uses rationals instead of doubles. I think it's four times the precision, because time is represented as a 128-bit unsigned integer numerator, a 32-bit unsigned integer denominator, and a sign flag, of course. This is part of the 2.0 rewrite of hifitime.
- Trajectory optimization with the Proximal Averaged Newton-type method for Optimal Control (PANOC). That's a relatively new optimization scheme (the initial paper is from 2017) for very large problems, and it is significantly faster than other methods. It's especially well suited for optimal control problems, as shown in this YouTube video by one of the two developers. In short, it's almost two orders of magnitude faster than IPOPT! Once this works, I'll write a (conference?) paper on this method for spacecraft trajectory optimization, as none has been published yet: I'm quite excited!
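The stateless-dynamics design in the first bullet is what makes the Monte Carlo case cheap. A minimal sketch of the idea (not Nyx's actual API): one immutable dynamics set is shared across many propagators via `Arc`, so each run only pays for its own state vector:

```rust
// Sketch (NOT Nyx's API): stateless Dynamics shared across propagators.
// Each thread owns only its own state vector; the equations of motion
// are shared, never copied.
use std::sync::Arc;
use std::thread;

// Stand-in for equations of motion: two-body point-mass gravity.
struct Dynamics {
    mu_km3_s2: f64, // gravitational parameter
}

impl Dynamics {
    // d(state)/dt for state = [rx, ry, rz, vx, vy, vz]
    fn eom(&self, s: &[f64; 6]) -> [f64; 6] {
        let r = (s[0] * s[0] + s[1] * s[1] + s[2] * s[2]).sqrt();
        let k = -self.mu_km3_s2 / (r * r * r);
        [s[3], s[4], s[5], k * s[0], k * s[1], k * s[2]]
    }
}

fn main() {
    let dynamics = Arc::new(Dynamics { mu_km3_s2: 398_600.4418 });
    let mut handles = vec![];
    for i in 0..4 {
        let dyn_ref = Arc::clone(&dynamics); // shared reference, not a copy
        handles.push(thread::spawn(move || {
            // Each "propagator" owns only its own state vector.
            let mut state = [7000.0 + i as f64, 0.0, 0.0, 0.0, 7.5, 0.0];
            let dt = 1.0; // s -- naive Euler step, purely for illustration
            for _ in 0..60 {
                let d = dyn_ref.eom(&state);
                for j in 0..6 {
                    state[j] += dt * d[j];
                }
            }
            state
        }));
    }
    for h in handles {
        println!("{:?}", h.join().unwrap());
    }
}
```

A real propagator would use a higher-order adaptive integrator, of course; the point is only that the dynamics carry no per-run state.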
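The Lagrange interpolation behind the trajectory feature is short enough to sketch. Given sampled states, the interpolating polynomial can be evaluated anywhere between integrator steps, which is what lets an event condition be solved to a tight tolerance. This is not Nyx's implementation, just the textbook formula:

```rust
// Sketch of Lagrange interpolation (not Nyx's implementation):
// given samples (t_i, x_i), evaluate the unique interpolating
// polynomial at time t.
fn lagrange_eval(samples: &[(f64, f64)], t: f64) -> f64 {
    let mut total = 0.0;
    for (i, &(ti, xi)) in samples.iter().enumerate() {
        // Build the i-th Lagrange basis polynomial evaluated at t.
        let mut basis = 1.0;
        for (j, &(tj, _)) in samples.iter().enumerate() {
            if i != j {
                basis *= (t - tj) / (ti - tj);
            }
        }
        total += xi * basis;
    }
    total
}

fn main() {
    // Three samples of x(t) = t^2 recover the quadratic exactly.
    let samples = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0)];
    println!("{}", lagrange_eval(&samples, 1.5)); // 2.25
}
```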
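The appeal of rational time-keeping is easy to demonstrate. The sketch below is the general idea only, not hifitime's actual layout or API: storing a duration as an exact fraction means additions accumulate no floating-point error, unlike doubles:

```rust
// Sketch of rational time-keeping (NOT hifitime's actual layout or API):
// an exact fraction of seconds instead of an f64.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Duration {
    num: u128,      // numerator
    denom: u32,     // denominator
    negative: bool, // sign flag
}

impl Duration {
    // Add two non-negative durations on a common denominator
    // (no fraction reduction or overflow handling, for brevity).
    fn add(self, other: Duration) -> Duration {
        assert!(!self.negative && !other.negative);
        Duration {
            num: self.num * other.denom as u128 + other.num * self.denom as u128,
            denom: self.denom * other.denom,
            negative: false,
        }
    }

    fn as_f64_seconds(self) -> f64 {
        self.num as f64 / self.denom as f64
    }
}

fn main() {
    // 0.1 s + 0.2 s is exact with rationals...
    let tenth = Duration { num: 1, denom: 10, negative: false };
    let fifth = Duration { num: 2, denom: 10, negative: false };
    let sum = tenth.add(fifth);
    assert_eq!(sum.as_f64_seconds(), 0.3);
    // ...but not with doubles:
    assert!(0.1_f64 + 0.2_f64 != 0.3);
    println!("{} / {} s", sum.num, sum.denom);
}
```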
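PANOC itself is too involved to sketch here, but its basic building block, the proximal gradient step (here a projected gradient step, since the prox of a box constraint is a projection), is simple. The toy problem below is hypothetical and is not PANOC: minimize (x - 3)^2 subject to x in [0, 2]. Plain projected gradient iteration converges to the constrained optimum x = 2; PANOC accelerates this kind of iteration with quasi-Newton directions:

```rust
// Sketch of the projected-gradient building block that PANOC
// accelerates (this is NOT PANOC itself). Hypothetical toy problem:
// minimize f(x) = (x - 3)^2 subject to x in [0, 2].
fn grad(x: f64) -> f64 {
    2.0 * (x - 3.0)
}

// Projection onto [lo, hi] -- the prox of the box's indicator function.
fn project(x: f64, lo: f64, hi: f64) -> f64 {
    x.clamp(lo, hi)
}

fn projected_gradient(mut x: f64, step: f64, iters: usize) -> f64 {
    for _ in 0..iters {
        // Gradient step on f, then projection back onto the feasible set.
        x = project(x - step * grad(x), 0.0, 2.0);
    }
    x
}

fn main() {
    let x_star = projected_gradient(0.0, 0.1, 100);
    println!("{x_star}"); // converges to the constrained optimum, 2
}
```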