Saturday, July 31, 2021

HuVVer-AVI testing

I just got a HuVVer-AVI from a friend at Oshkosh (event recap to follow; this is just a quick technical note). It seems like a dandy little instrument platform that could be super useful. It is ESP32-based, with no graphics acceleration or anti-aliasing of any sort, but it is rather capable for what it is.

I messed with the code, and eventually got this very simple static demo of an "Airball"-like display painting to the screen:

The `loop()` time for each paint iteration was 74 milliseconds; it is possible that the many arcs take a long time to paint. By contrast, the following, one of the standard screens (with the colors flipped because I was messing with it):

takes 63 milliseconds to paint.
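For reference, the timings above were gathered by simply bracketing one paint pass and reporting the elapsed time. A minimal sketch of that technique follows; `fakePaint` is a hypothetical stand-in for the real screen-painting routine, and on the ESP32 itself you would bracket the call with `millis()` rather than `std::chrono`:

```cpp
#include <chrono>
#include <cstdio>

// Measure how long one call to a paint routine takes, in milliseconds.
template <typename Fn>
long timePaintMs(Fn paint) {
    auto t0 = std::chrono::steady_clock::now();
    paint();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0)
        .count();
}

// Hypothetical stand-in for the real screen paint; it just burns a bit
// of CPU so there is something to time.
void fakePaint() {
    volatile unsigned sink = 0;
    for (unsigned i = 0; i < 5000000; ++i) sink += i;
}
```

Usage is just `std::printf("paint took %ld ms\n", timePaintMs(fakePaint));` once per frame, which is effectively what the 74 ms and 63 ms numbers above are.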

Currently, Airball operates at a 20 fps update rate with time to spare, including saving and painting the last N raw "ball" images in fading color. It is clear to me that the HuVVer-AVI cannot maintain that kind of "instant feedback" feeling: at 63 to 74 milliseconds per frame, it tops out around 13 to 16 fps. It is an instrument, not a "virtual windsock" that flutters instantly with every little twitch.
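The "last N ball images in fading color" idea can be modeled as a ring buffer of recent ball positions, each painted with an alpha that decays with age. The sketch below is illustrative only; the names (`BallSample`, `BallTrail`, and so on) are assumptions, not Airball's actual types:

```cpp
#include <array>
#include <cstddef>

// A recent ball position on screen.
struct BallSample { float x, y; };

// Fixed-size ring buffer of the last N ball samples, with a linear
// alpha fade by age. Hypothetical sketch, not Airball's real code.
template <std::size_t N>
class BallTrail {
public:
    void push(BallSample s) {
        buf_[head_] = s;
        head_ = (head_ + 1) % N;
        if (count_ < N) ++count_;
    }

    std::size_t size() const { return count_; }

    // i = 0 is the newest sample.
    BallSample recent(std::size_t i) const {
        return buf_[(head_ + N - 1 - i) % N];
    }

    // Alpha for a sample of the given age: newest (age 0) is fully
    // opaque; older samples fade linearly toward transparent.
    float alphaFor(std::size_t age) const {
        return 1.0f - static_cast<float>(age) / static_cast<float>(N);
    }

private:
    std::array<BallSample, N> buf_{};
    std::size_t head_ = 0, count_ = 0;
};
```

Each frame, you would `push()` the new sample and then paint samples oldest-first, so the fresher, more opaque ones draw on top.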

But of course: is that important? It is to some people, and not to others. Is it important to our project? How does that compare to the fact that the HuVVer-AVI is very small, and is ready to use, made by someone else? I honestly don't know.


  1. What's the graphics API like? Are you employing drawing optimizations like only painting the portions of the screen which are changing (previous ball location + new ball location), and mirroring the coordinate computations for the repeated arc sections?

    1. The API is pretty simple; it basically bottoms out at some `drawPixel()` primitive that is repeatedly invoked. For my testing, I'm doing nothing fancy -- just painting exactly what you see, once, and figuring out how long it takes.

    2. ... but to address the larger question -- again, with the Raspberry Pi based stuff, I can do 20 fps with anti-aliasing using `libcairo` and really take advantage of the bright, colorful display. Every pixel taking up room in front of an airplane pilot is valuable real estate, and it makes sense to use it as effectively as possible. So even if I were to use a tiny 2.4" screen like this, I'd still want to tie it to a Raspberry Pi or Pi Zero or ... something like that.

    3. > it basically bottoms out at some `drawPixel()` primitive that is repeatedly invoked.

      Sounds just like the time I tried to draw a flying starfield on my Apple IIGS and it was dreadfully slow, "I can see the pixels being written". This is the sort of thing that would be _massively_ improved by an optimization like providing a `drawRectangle()` or at least a `drawHorizontalLine()` that writes a sequence of consecutive identical pixels to the framebuffer, rather than doing a function call and an XY-to-address calculation for every pixel.
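The optimization the last comment describes can be shown against a simulated framebuffer. In the sketch below, all names (`FrameBuffer`, `drawPixel`, `drawHLine`, the `fillRect*` helpers) are illustrative, not the HuVVer-AVI's actual API; the point is only that a run primitive does one address calculation per row instead of one per pixel:

```cpp
#include <cstdint>
#include <vector>

// Simulated 16-bit (RGB565-style) framebuffer.
struct FrameBuffer {
    int w, h;
    std::vector<uint16_t> px;
    FrameBuffer(int w_, int h_) : w(w_), h(h_), px(w_ * h_, 0) {}

    // Naive primitive: one call and one XY-to-address calculation
    // per pixel.
    void drawPixel(int x, int y, uint16_t c) { px[y * w + x] = c; }

    // Run primitive: compute the start address once, then write a
    // contiguous span of identical pixels.
    void drawHLine(int x, int y, int len, uint16_t c) {
        uint16_t* p = &px[y * w + x];
        for (int i = 0; i < len; ++i) p[i] = c;
    }
};

// Rectangle fill built on drawPixel: O(w*h) address calculations.
void fillRectSlow(FrameBuffer& fb, int x, int y, int w, int h,
                  uint16_t c) {
    for (int j = 0; j < h; ++j)
        for (int i = 0; i < w; ++i)
            fb.drawPixel(x + i, y + j, c);
}

// Rectangle fill built on drawHLine: O(h) address calculations.
void fillRectFast(FrameBuffer& fb, int x, int y, int w, int h,
                  uint16_t c) {
    for (int j = 0; j < h; ++j)
        fb.drawHLine(x, y + j, w, c);
}
```

Both fills produce identical pixels; the fast path simply hoists the address math out of the inner loop, which is exactly the win over a bare `drawPixel()` bottom layer. A real driver would go further and hand the whole span to the display controller in one SPI burst.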