Kempston Joystick Interface Stripdown

One thing you seem to accumulate a lot of when acquiring ZX Spectrums is a variety of peripherals. Joystick interfaces, in particular, seem to turn up time and time again.

I was recently sorting through them to find a working one to send over to @breakintoprog and one I tested was broken.

This is a post about me stripping it down and figuring out what makes it tick.

ZX Spectrum Joystick Interfaces

If you wanted to use a joystick with a Spectrum you had to buy an add-on joystick interface. These connected to the edge connector at the back of the machine and featured one or more Atari 9-pin joystick ports on them.

The Spectrum had a few competing joystick standards at the time. We had the Protek/AGS “cursor” type that mapped its inputs to the cursor keys and the Kempston standard which went a different path. There were also “Fuller” and “Programmable” types, but I don’t remember those being as common.

Sinclair released the “Interface 2” which mapped its inputs to the number keys but also had a different port wiring, making the actual joysticks themselves incompatible with the Atari 9-pin standard that other platforms supported. It wasn’t until Amstrad released the +2 that the Spectrum came with an in-built set of joystick ports at all.

Generally a game for the Spectrum would end up supporting Cursor, Kempston and “redefinable keys” out of the box. This latter option made it possible for us humble Sinclair owners to actually use our crappy Sinclair joysticks (seriously, have you ever used one of those things?!).

The Kempston Standard

One thing that makes the Kempston standard appealing to us Z80 assembly programmers is that it is extremely easy to work with. You issue a single IN to port $1F and read a byte back with the form 000FUDLR (active bit high).
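On the Spectrum itself that really is a single IN A,($1F). As a host-side C sketch (the KEMP_* names are my own, purely for illustration), the status byte decodes like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Kempston status byte layout: 000FUDLR, bits active HIGH. */
#define KEMP_RIGHT (1u << 0)
#define KEMP_LEFT  (1u << 1)
#define KEMP_DOWN  (1u << 2)
#define KEMP_UP    (1u << 3)
#define KEMP_FIRE  (1u << 4)

/* True if the given action bit is set in the status byte. */
bool kemp_pressed(uint8_t status, uint8_t mask)
{
    return (status & mask) != 0;
}
```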

Other standards required checking multiple ports that were associated with the keyboard (hence them being key-mappable). That mapping made it easier for BASIC programmers or people who wished to use the “redefine keys” options in games - something that simply wasn’t possible with a Kempston joystick.

Just knowing this tells us a little about the hardware itself.

  • It responds only to an I/O READ on port $1F
  • It does not respond to memory reads, writes or I/O WRITEs to the port.
  • It uses 5 bits, so it must use 5 data lines on the edge connector
  • It sets bits HIGH when an action is active, thus the default state for each bit is LOW

Kempston Teardown

Having a broken Cheetah interface that was known to be Kempston compatible and an afternoon on my hands, I decided to try my first reverse engineering of the hardware.

Note: this has been done many times by many people, so there’s really nothing “new” here for the community to learn. However, it’s my first time trying something like this, and having not yet read or watched an existing teardown, I was able to approach it without outside influence.


The case was a bit fiddly to open as it was clipped together in plastic (no screws), but when I popped it open I saw this.

Cheetah Kempston Front

Here we have:

  • 9-pin “Atari” joystick port
  • 74LS366 logic IC
  • 74LS138 logic IC
  • 2x diodes
  • 5x resistors
  • 1x jumper wire
  • 1x edge connector

Resistor Colour Bands

When you check the colour codes:

    Brown  = 1
    Black  = 0
    Orange = x 1k
           = 10K

I’m terrible at decoding the colour bands on the resistors, so I measured them with my multimeter - they all came out as 10K, which is what the code says.

As there’s 5 of them and they’re all high resistance levels, it’s pretty easy to deduce that they’re used as pull-up or pull-down resistors for each of the 5 inputs. We’ll check this a bit later as we go further.

Flipping it over we see:

Cheetah Kempston Back

Just a mess of traces - we’ll decode all that shortly.

The ICs

The two integrated circuits on the board are a 74LS366 and a 74LS138.

Looking up the datasheets tells us this:

  • 74LS138: “One-of-8 line decoder/demultiplexer”
  • 74LS366: “Tri-state line driver/buffer with inverter”


Here’s the pin out of the 74LS138:

74LS138 Pinout

… and the logic diagram:

74LS138 Logic

E1, E2 and E3 feed an AND gate, with E3 being active HIGH. This gate controls the ENABLE state of the chip.

We then have 3 bits of (A0-A2) address inputs and 8 bits of output.

Looking at the truth table and the functional description of this chip, its job is to take the combination of the three enable inputs and the three address bits and drive exactly one of the eight outputs LOW. The only time the outputs are all the same is when the ENABLE gate condition isn’t being met; in that case the outputs will all be HIGH.
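We can capture that behaviour in a small C model of the truth table (my own sketch; it treats the eight active-low outputs as one byte):

```c
#include <stdint.h>

/* Model of the 74LS138: returns the 8 active-low outputs as a byte
   (~Y7..~Y0). When the enable condition (E1 low, E2 low, E3 high)
   isn't met, all outputs are HIGH (0xFF); otherwise exactly one
   output, selected by the 3-bit address, goes LOW. */
uint8_t ls138(int e1, int e2, int e3, unsigned addr /* A2..A0, 0-7 */)
{
    if (e1 || e2 || !e3)
        return 0xFF;                        /* not enabled: everything HIGH */
    return (uint8_t)~(1u << (addr & 7u));   /* selected output pulled LOW */
}
```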

We can already deduce that this chip is likely the one that decodes the IOREQ/READ and the port address, with one of the outputs likely used to chip-select the 74LS366.


Here’s the pin out and logic diagram of the 74LS366:

74LS366 Pinout

74LS366 Logic

Here we see that the chip has 6 inputs (A1-A6) and 6 inverted outputs (Y1-Y6). These are controlled by two active-low inputs, ~G1 and ~G2, combined in an AND gate.

This chip is very likely to be the one that handles the input from the joystick, mapping it onto the output data lines of the edge connector if it’s active. This latter point is important, otherwise the joystick would just trash the data lines outside of the IOREQ.

Tracing the, umm, Traces

First up, let’s overlay the front and the back of the PCB.

Cheetah Kempston PCB Overlay

Here we see that the 74LS138 is at the “bottom” of the board and the 74LS366 is the one sat in the middle. The resistors are connected to the joystick port pins and one of the pins on the 74LS138.

Based on what we already learned from the datasheet on the 74LS138, we can speculate that this could be one of the ENABLE gate inputs.

Using a paint program, I started annotating the traces and some of the pins.

Cheetah Kempston PCB Traces

This is beginning to tell us a great deal already!

We now know that it’s using address bits A5-7 on the expansion, involves ~IOREQ & ~RD and ~M1 for control and touches all of the 8 data bits.

When we turn the port address $1F to binary we get:

    7 6 5 4 3 2 1 0
    0 0 0 1 1 1 1 1 = $1F

Bits 7, 6 and 5 are all zero - and these are exactly the address bits that have connections on the expansion bus!
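If the board really does only decode those three address lines, the selection logic boils down to a single mask test (a sketch; the function name is my own):

```c
#include <stdbool.h>
#include <stdint.h>

/* The board only decodes A5-A7, so any port address with those three
   bits low should select the joystick - $1F is just the conventional
   choice (A0-A4 all high). */
bool kempston_selected(uint8_t port)
{
    return (port & 0xE0) == 0x00;   /* A7, A6, A5 must all be 0 */
}
```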

Looking at the data lines here we can see that D0-5 are connected directly to the 74LS366; D6 and D7 are connected to the anode (+ve) end of diodes.

The other main inputs of the 74LS366 are connected to the 9-pin Atari socket but appear to have pull-up resistors too. At this point, I’m speculating that the resistors are there to keep the inputs pulled high instead of floating; joystick actions pull them low, and these are then inverted by the 74LS366 to form the 000FUDLR bit pattern we see on the Spectrum.


At this point, I fired up KiCad and started mapping out the various bits I knew. With the datasheet at hand, I ended up with this:

Cheetah Kempston PCB Schematic.

I’m still learning all this stuff, so it may not be correct.

How I interpret all this is that:

  • 5 of the 9-pin Atari inputs are connected to the 74LS366 via pull-up resistors
  • The 6th input of the 74LS366 is pulled high
  • The 74LS366 inverts the inputs, meaning D5 is always low
  • The 74LS366 enable/chip select is controlled by the 74LS138

If you remember from before, the 74LS138 enables exactly one output for a given combination. Only pin 15 is wired up, so we have to look at the conditions that will set that from the datasheet.

The datasheet says:

    A0, A1, A2 must be LOW
    E1, E2 must be LOW
    E3 must be HIGH

The A input pins are connected to A5, A6, A7 on the address bus, which we have already determined to be the pins we are interested in. This means that the interface will only be active when these lines are all low. I wonder if that means we can read the joystick from any port, as long as the A5-7 pins are low? Might be something worth testing.

The E1 & E2 pins are connected to ~RD and ~IOREQ, which we know are active low. This would mean that E3 (~M1) needs to remain high for the combination to trigger.

I will need to read more about the M1 line to fully understand it, but it is an output of the Z80 itself: it goes low during an opcode fetch, and ~M1 together with ~IOREQ signals an interrupt acknowledge cycle. Requiring ~M1 to stay HIGH here means the circuit ignores interrupt acknowledges and only responds to a genuine I/O read. This makes sense in this circuit, as it’s essentially saying “when in the middle of a READ IOREQ (and not an interrupt acknowledge), activate the 74LS138”.

All of this means, in layman’s terms:

When a READ IOREQ occurs on port $1F, activate the circuit that takes the joystick inputs and maps them onto the data lines D0-D4.

Which is exactly what we expect to happen from using it!


This is quite an elegant little circuit that involves a couple of simple ICs and some simple, yet clever logic to achieve a joystick input.

It was a lot of fun to go through this and reverse engineer it myself. I’ve never done something like this before, so it was a huge learning experience for me overall.

To make sure I haven’t got my interpretation incorrect, I had a look at Old Machinery’s blog on the subject. I am pleased to say that it pretty much tallies up with that interpretation too.

Perhaps I’ll have a go at building this circuit myself on a breadboard :)

Until next time, thanks for reading!

Arduino and 4116 DRAM

First blog in a while; I’ve been working away slowly on SMEG but I’ve also been learning how to repair and restore the very hardware I’ve been emulating and writing software for - the humble ZX Spectrum 48K!

I’ll talk about some of the repairs another time, but I thought I’d take this opportunity to write about some of the experiments I’ve been performing with the Spectrum’s RAM.

Why RAM?

Of the two Spectrums I’ve repaired so far, both have had a failure in one or more of the lower RAM chips (and an overall failure of the DC/DC converter circuit). Part of the repair was to swap out all 8 chips of the lower 16K of RAM for a “modern” replacement daughterboard. This has left me with 16 chips just gathering dust.

Following the progress of a 4116 RAM tester by a chap called Stephen Vickers on the Spectrum For Everyone Facebook group really piqued my interest. I wondered if I could learn enough to interact with these RAM chips and then potentially verify if they are working or have shorted somewhere. I’m not interested in building a product for sale like Stephen is doing, but I am interested in the learning and practical application of that knowledge.

The ZX Spectrum’s RAM

The ZX Spectrum 48K has two banks of dynamic RAM; the “lower” 16K of RAM and the “upper” 32K. For this post I’m just going to ignore the “upper” RAM completely, and focus on the “lower” 16K of RAM.

The lower 16K of RAM on a ZX Spectrum consists of 8 x 4116 DRAM chips with a 150ns access time.

Each of these 8 chips supplies a single “bit” of a byte at the same memory address; each chip holds 16kbits, hence the 16kbytes of storage. In practice, this means we have 8 chips all listening to the same addressing and access signals, but with their inputs/outputs tied to different data lines on the Spectrum’s ULA.
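A quick sketch of how those eight one-bit reads combine into a byte (the function is hypothetical, just to illustrate the arrangement):

```c
#include <stdint.h>

/* Eight 4116s share the same address and control signals, but each one
   drives a different data line; reading a byte is really eight
   simultaneous one-bit reads, one per chip. */
uint8_t assemble_byte(const int bits[8])   /* bits[i] from the chip on Di */
{
    uint8_t value = 0;
    for (int i = 0; i < 8; i++)
        value |= (uint8_t)((bits[i] & 1) << i);
    return value;
}
```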

What this means, though, is that if a single 4116 chip goes bad, it affects every byte of data in that lower 16K, making it impossible to “boot” the computer. Depending on the failure mode, it can actually lead to a pathological failure of the power circuits, which in turn can cause more failures, and so on. At this point, a repair often means desoldering all of these chips and replacing them with known-good parts, or a modern replacement board.

There are test kits out there (such as the ZX Diag Cart) that plug into your Spectrum and diagnose such faults in-situ; but if you’re in a position like me, where you have 16 chips on a workbench with at least 2 faulty ones, you can’t really use a cart to diagnose them.

At this point, I decided to learn about what these chips are and how they work. I thought it’d be a good experience to pick up some new electronics skills on the way - but also to help expand my knowledge about the internals of the ZX Spectrum too.

About the 4116 DRAM

The 4116 DRAM chip is an integrated circuit packaged as a 16-pin DIP.

4116 DRAM Pinout

It has 4 pins dedicated to power, 7 pins for addressing, 2 for data I/O and the final 3 are for controlling the chip.

The 4 power pins are a quirky curiosity of the lower RAM in the Spectrum that is not shared by the 4164 / TM4532 / MSM3732 DRAM chips used in the “upper” RAM. This chip requires a +12V, +5V and a -5V line, as well as GND. It is this mix of voltages that makes the chips prone to destroy themselves if they fall out of the tolerance ranges specified.

The RAM is addressed with a 7-bit row address and a 7-bit column address, giving 128 x 128 individual “cells” that can be accessed. Each cell has a small capacitance which stores the on/off state of the bit. This is how the memory holds 16kbits of data.

Internally, it is arranged as two “halves” with rows 0-63 in one side of the chip, and 64-127 held on the other side. This doesn’t materially impact how we interface with the chip, but it does explain some of the quirks of the failures I’ve seen when interacting with the RAM.

As the 4116 is dynamic RAM (aka DRAM) it means that every row must be refreshed every 2ms in order to retain the data; without this the capacitance will leak away and the bits won’t be as you expect them.

Powering up

The first problem to solve when wanting to interact with this RAM is how to power it on a breadboard. The real ZX Spectrum has a DC-DC converter circuit that can take the input voltage and output the +12V, +5V and -5V required by the 4116 DRAM; we could try and replicate this on the breadboard, but there’s a much simpler way using a couple of regulators and 7660 charge pumps.

The first problem to solve is getting that initial +12V from the typical +9V DC input. This can be done by using a 7660 charge pump in the following configuration:

7660 Doubler Circuit

This takes the +9V source and brings it to about +16V after some of the drop off. This is more than enough for our purposes - if we connect that to a 7812 voltage regulator we get a stable +12V. Similarly, the +5V is easy to obtain from either the +9V input, or the +12V line by pushing it through a 7805 voltage regulator.

We now have two of our voltages, what about the third - the -5V line?

This is described nicely on Peter Vis’ site; the solution is a second 7660 in a different configuration.

7660 Inverter Circuit

By connecting this to the +5V line off the 7805 regulator, we now get a stable -5V that we can use to power the 4116 chip.

Given we also have a +5V or +9V line we can use that to power the Arduino via the VIN pin (that goes into its internal regulator).

Success! Note - make sure you connect the +12V and -5V lines correctly, I fried a chip because I ran them to the wrong socket pins initially - whoops!

Operating the 4116 DRAM

Now we have power we can get to work understanding what is needed to use the chip itself.

The steps for interacting with the chip are broadly:

  1. Begin the cycle by ensuring that ~RAS and ~CAS are held high; this generally needs to be for at least the ~RAS precharge period as documented in the datasheet.
  2. Present the row address to the A0-6 pins, pull ~RAS low for the required amount of time
  3. The configuration of the Write Enable (~WE) pin at this point will determine if the operation is a read or a write
  4. If we are writing (~WE is low), then we should ensure that the data is set on the Data In (D) pin
  5. Present the column address to A0-6 and then pull ~CAS low for the required amount of time
  6. If the operation is a read (~WE high), then the data will be available on the Data Out (Q) pin

You’ll see here then that the access address is multiplexed into the address lines. The ZX Spectrum does this by presenting the lower 7 bits of the address first and then the upper 7 bits, thus giving us a 14 bit address (16,384 bits total).
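The steps and the address multiplexing above can be sketched as a little software model (purely an illustration of the protocol; the names are mine and there is no real timing here):

```c
#include <stdint.h>

/* A toy model of a 4116: 128 x 128 one-bit cells, addressed by a
   multiplexed 7-bit row then 7-bit column, as in the steps above. */
typedef struct {
    uint8_t cell[128][128];   /* one bit per cell */
} dram4116;

void dram_write(dram4116 *d, uint16_t addr, int bit)
{
    uint8_t row = addr & 0x7F;         /* Spectrum presents low 7 bits first */
    uint8_t col = (addr >> 7) & 0x7F;  /* ...then the upper 7 bits */
    d->cell[row][col] = (uint8_t)(bit & 1);  /* ~RAS, ~WE, ~CAS strobed here */
}

int dram_read(const dram4116 *d, uint16_t addr)
{
    uint8_t row = addr & 0x7F;
    uint8_t col = (addr >> 7) & 0x7F;
    return d->cell[row][col];          /* data appears on Q after ~CAS */
}
```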

Arduino and 4116

With this in mind, I wired up an Arduino to the various pins and started hacking some code.

Arduino Pinout for 4116 DRAM

The first “issue” I had was that the Arduino generally works in the order of microseconds, whereas the DRAM is working in the nanosecond scale.

The Arduino has a minimum “tick” time of a single clock cycle; to create delays on the order of nanoseconds (I believe around 100ns is the lowest achievable) we have to burn clock cycles.

To resolve this I used a library by Hans-Juergen Heinrichs that generally gave me what I needed.

The second issue I faced was that I often hit bad reads; after asking on Twitter, I was pointed in the right direction by @leelegionsmith and @breakintoprog. I was using digitalRead / digitalWrite from the Arduino library, but they are pretty slow - especially when you need to set a collection of pins at the same time.

The solution was to scrap the friendly helpers and hit up Port Manipulation directly. With this I was able to set all of the address pins in two port writes instead of many individual outputs.

I also made the mistake of using a couple of the “pure analog” pins on the Arduino nano that didn’t exist on the Uno.

This resolved my issue and I was able to start writing bit patterns into the RAM and reading them back out. I’d coded up some simple routines to support the access modes of the RAM. The first is the classic ‘random’ read/write; the second is the feature that the ZX Spectrum’s ULA relies on heavily for the video RAM - “page mode”.

The difference between these two modes is really down to the use of ~RAS; you keep it low and then strobe your column addresses & ~CAS signals. This has the effect of being able to read data within the same row much faster than with random I/O, as you bypass the extra row address selects and don’t take the hit of the ~RAS precharge time.

Testing RAM

One fun thing here was that I was quickly able to identify “dead” RAM and even deduce the type of failure each chip had suffered. Some chips completely failed to be written to; reading them returned the “uninitialized” bit pattern you get when you read fresh RAM. Others had random bit failures, while others still had a clear “order” to the failure.
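As an illustration, here is a host-side C simulation of the kind of march-style test I was running. The chip model and the stuck-at-zero fault are invented for demonstration; the real version drives the Arduino pins rather than an array:

```c
#include <stdint.h>

/* Toy model of a possibly-faulty 4116: cell `stuck_addr` (if >= 0) is
   stuck at 0, mimicking one of the failure modes I saw. */
typedef struct {
    uint8_t cell[16384];
    int stuck_addr;        /* -1 for a healthy chip */
} chip;

static void chip_write(chip *c, int addr, int bit)
{
    c->cell[addr] = (addr == c->stuck_addr) ? 0 : (uint8_t)(bit & 1);
}

static int chip_read(const chip *c, int addr)
{
    return c->cell[addr];
}

/* March-style test: fill every address with 0s and verify, then 1s and
   verify. Returns the number of failing reads. */
int dram_march(chip *c)
{
    int failures = 0;
    for (int pass = 0; pass < 2; pass++) {
        for (int a = 0; a < 16384; a++)
            chip_write(c, a, pass);
        for (int a = 0; a < 16384; a++)
            if (chip_read(c, a) != pass)
                failures++;
    }
    return failures;
}
```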

Perhaps I’ll wire all this up into a tester board for myself; perhaps using a Veroboard so it’s a little more permanent than a breadboard!

Learning KiCad

One final thing I did was to start learning how to use KiCad to document the circuits I’ve been building. So here’s the current circuit that’s on the board. There’s probably a few improvements I could make - so feel free to point them out!

Arduino - 4116 DRAM Circuit

So what next?

I moved from here to another ZX Spectrum RAM-related project. Stay tuned for a write up about that soon.

Introduction to SMEG

Introducing SMEG

In this post I’m going to introduce the SMEG adventure game system that I’m creating to build my ZX Spectrum point and click adventure.

What is SMEG?

SMEG stands for Scriptable MachinE for adventure Games, a very tenuous play on words that pays homage to LucasArts’ SCUMM and one of my favourite TV shows, Red Dwarf.

Rather than just being a fun acronym, it nicely describes the approach and the architecture of the system.

An overview of SCUMM

Why are you talking about SCUMM? I’m here to learn about SMEG!

To understand SMEG, it is worth getting familiar with SCUMM and what it was about.

Let’s start at Wikipedia for a hand. The original version of SCUMM was created by the legendary Ron Gilbert for the game Maniac Mansion.

Its entire purpose was to provide an abstraction between the machine and the content, allowing people to use commands like walk bob to door instead of having to know that the character bob was at memory location $6A00, the door object was at position 130,90, and the walk command involved playing animations, path finding and moving the character between frames.

As mentioned in the Wikipedia article, the system is somewhere between a game engine and a programming language. They exist in a symbiosis that’s balanced around the creation of point and click adventure games.

For a system that was designed and built in 1987, it was very advanced - and is still a very elegant way of approaching the adventure game genre. The idea still persists today in the form of Adventure Game Studio and other similar tools.

Ask me about SMEG

SMEG is my attempt at creating such a system for the ZX Spectrum. It started out as me messing about and making a simple crude Monkey Island-esque demo for the Speccy and it has morphed and blown up into my making SMEG (along with a companion game demo).

I didn’t set out intending to build a SCUMM-style system; rather, it started creating itself based on the complexity and requirements of building the content in Z80 assembler.

Previously, everything was hard-coded into the demo, and I found that adding a new dialog, a new character, a new object, etc. became quite a tedious and error-prone task. The more items I added, the harder it became to change the data structures and code that used them without breaking things.

A very clear example that stands out to me was when adding something as simple as a status flag to the “stage object” structure caused a load of subtle bugs and even crashes in the code that worked on that data. Remember that in Z80 assembler we have no type checking, and accessing the data often means incrementing the HL register or some other offset-based lookup. Things broke a lot, and I found I was spending so much time fixing existing things after a change that I stopped adding new things for a while.

The SMEG “engine”

SMEG Screenshot

The SMEG engine is currently designed around several core concepts:

  • Stage - A “room”, comprising actors and props. The background of the room is defined as a tilemap.
  • Actor - A walking, talking being
  • Prop - An interactable object on the stage. It may or may not be visible
  • Ego - the player’s actor
  • Inventory - A collection of objects in the ‘pocket’ of the Ego
  • Verbs & Sentences - Verbs are the ‘actions’ that the Ego can perform (Look, Walk, Take, Talk). These are combined with Nouns (such as Actors, Props, Inventory objects) to do something.
  • Dialog - Conversation system
  • Script - A virtual machine that ticks away and schedules the next sentence for execution.

The SMEG engine handles the drawing of the room, the sprites and all of the cursor interactions with the world. The beating heart of the system is a very simple bytecode-based virtual machine that runs the script actions, such as an actor saying a dialog line or walking to a position.
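To illustrate the idea (this is not SMEG’s actual bytecode - the opcodes and encoding here are invented), a tiny dispatch loop in this style looks like:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Invented opcodes, purely for illustration. */
enum { OP_END = 0, OP_SAY = 1, OP_WALK = 2 };

/* Executes until OP_END, appending a trace of each action to `out`
   (a real engine would trigger speech bubbles, pathfinding, etc.). */
void vm_run(const uint8_t *pc, char *out, size_t out_size)
{
    out[0] = '\0';
    for (;;) {
        switch (*pc++) {
        case OP_SAY: {                   /* operand: length, then text */
            uint8_t len = *pc++;
            size_t used = strlen(out);
            snprintf(out + used, out_size - used, "SAY:%.*s;",
                     (int)len, (const char *)pc);
            pc += len;
            break;
        }
        case OP_WALK: {                  /* operands: x, y */
            uint8_t x = *pc++, y = *pc++;
            size_t used = strlen(out);
            snprintf(out + used, out_size - used, "WALK:%u,%u;", x, y);
            break;
        }
        case OP_END:
        default:
            return;
        }
    }
}
```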

The “verbs” I have chosen are the standard 9 used by most SCUMM games, but in reality they may drop to 6, with things like “Use”, “Pull” and “Push” being largely the same, for example.

Verbs are important as they’re the basis for the “things you can do” in the world.

SMEG overview

SMEG Overview

The SMEG System has three phases a project goes through to turn into the end “game” that you can run on your Spectrum.

  1. Content Creation
  2. Content Build
  3. Compilation

These phases are as it is now, and are very likely to change as time goes on - especially the final stage. More on that later.

Content Creation

Content creation is the “fun” part, where I can build scenes, write scripts and generally build the stuff you see and interact with.

Making a room

I’ve adopted the open source map editor Tiled to build the rooms. It’s really easy to use and has a format that is easy to parse and process by my tools.

As you can see in the screenshot above, the room background is made up of a tilemap, with Tiled’s object system providing the basis for how you specify objects and their scripts in the room.

I have a convention-based approach of assigning the verbs to the objects, using the Tiled object property system to hold a lot of the data.

As you can see in the screenshot, there is a rudimentary scripting language (SMEG Script) that lets you direct what happens when a sentence is run.

    walkTo <actor> <position>
    pickUp <actor> <object>
    speech {
        line <actor> <text>
    }

The language is currently based around Tcl, but will likely move more towards a simple C syntax as I feel most comfortable with that.

Anything that lives beyond the scope of a room lives in a SMEG Project file, a simple JSON file that has information about the actors, the sprites and the rooms.

Dialog and speech currently lives in the Tiled map files, but it is very likely that they’ll move into a separate set of files soon - mostly because it feels the wrong way to be authoring them.

Content Build

The content “build” stage is what takes all of the content files and turns them into something that the SMEG engine can actually use. This stage is captured by a single custom tool called the SMEG Build Tool.

This is a .NET Core project that presently, at least, is a crude Z80 code generator. In essence, this was the utility I created to make it easier for me to get content into the game, without having to write the Z80 structures.

Originally a single-pass emitter, it has gradually moved into requiring two passes over the assets, mostly because it can then perform lookups and validation on those objects.

This is handy as it allows me to move a lot of the validation away from the runtime (where it would be slow) and into this tool - where I can do it much more easily and across the entire project.

The SMEG Build Tool is also the SMEG Script compiler that turns the human readable instructions into bytecode that the SMEG VM can execute.

The Z80 emitted by this tool contains everything the game needs; the sprites, the dialog, the bytecode, the object definitions, the tile maps - everything.


The content file emitted is compiled with the engine source by SJAsmPlus into the final SNA/TAP file loadable by the Spectrum.

The reason for them both being compiled together is historical; the code and the content were originally together. The SMEG Build made it easier for me to work on the content, but I still needed to see the assembler to help with debugging.

One thing that I will be moving away from is the content being compiled with the engine. There are many benefits to separating them; namely that I can compress rooms to get more into the Spectrum’s limited memory and allow multiload content, effectively allowing for much bigger games. This all still needs to be looked at, but it is really the best direction to be taking.

This likely means that the final compilation stage will turn into one of ‘mastering’, taking the compiled engine code and game data and laying them out in a form that can be loaded by the Spectrum.

The Future

Everything is still very early, but it feels like the foundations are taking shape. I’m building a game demo with SMEG and using that to drive the features and pipelines. I’m not really setting out with a goal in mind, just happy to let things evolve and then refine them as we go on.

When I’m at the stage of being “happy” with where things are, likely near the release of the game demo, I am planning to open source all of this for others to enhance and make their own content with. Open sourcing it now just isn’t the right time, especially as everything’s changing and in flux. The code is also pretty nasty, too :)


Some questions I have been asked, that I will answer here:

Which Spectrum models are you aiming for?

I’m currently working with 48K, because I’ve not added 128K support to my development emulator (neccy). I’d love to keep things within reach of the 48K models, but I’m not against changing this to 128K only if it becomes too constrained.

Are you targeting the NEXT/ULA Plus/etc?

Not yet. I don’t have either of these systems and trying to target them will mean forking the code - for the NEXT it means that much of the display code will need reworking. I would very much love to target the NEXT in the future, but first I’m focussed on the original model Spectrum.

What about Kempston Mouse support?

Will probably add this, but I don’t have a real mouse to test it on.

What about music support?

Likely to be added in the future. I may need some help here.

Why a new engine instead of porting SCUMM VM?

I wanted to make my own game for the ZX Spectrum, I never really set out to build a SCUMM like system, it just evolved that way.

When are you going to open source this?

When I’m ready.

When can I play the SMEG game demo?

When it’s ready.

Will there be a port for the C64/Acorn BBC/etc?

No idea. I’d love to consider something like this in the future, but right now everything is designed specifically for the Spectrum, so the content may never be “portable”.

Making a game for the ZX Spectrum in 2020

One of the aims for me creating my Spectrum emulator neccy (short for not a speccy) was so I could learn the ins-and-outs of the Spectrum - something I never did when I was using this machine back in the ’80s.

“What better way to learn it”, I thought, “than to create an emulator?”. As it stands, writing an emulator for a 35+ year old machine is really only part of the puzzle. I liken it to understanding all of the notes and function of an instrument but then being absolutely clueless when it comes to playing a tune.

I am a person who learns through having a challenge; I simply can’t sit and read tutorials and copy/paste examples and expect any of it to stick. I have to have something that I believe in, something to motivate me. I also have to be thrown in at the absolute deep end and have to figure it out.

Armbands included

One thing about coding for an old machine in 2020 is that there are still folks out there who are programming for, blogging about and playing games on such systems. I dipped my toe into the retro gaming waters on Twitter and found a whole host of helpful folks and resources.

Whilst one may be in the deep end of programming for the Spectrum, there are definitely some armbands to help you stay afloat. Two such resources have come from Dean Belfield and Jonathan Cauldwell, both of whom were coders for the Spectrum back in the day and are still active today.

One thing I will say, however, is that much of the information out there already pretty much starts on/near the “end state” for Spectrum development, immediately talking about sprite shifters, pre-shifted sprites - self-modifying scroll routines, often with very few visuals. It’s all very cool stuff and hugely impressive - but it’s several steps ahead of where I am in my own personal journey. As such, there’s a gap in the market for the very basic detail on some of the techniques. Maybe one day I’ll feel motivated to type up my notes that I’ve been making whilst I’ve learned from the masters.

Peer Pressure

Anyway - I had to decide on a project. What was I going to make?

Pong? That should be ok as a starter project right?

How about Tetris? I even started it!

Falling block

Everything changed one evening during a little bit of banter with a couple of retro folks on Twitter.

At the time, I was mucking about with a really simple Point & Click Adventure game engine in C-like C++ using the olcPixelGameEngine.

olc Powered Point & Click

I was part way through creating a simple SCUMM-like scripting VM and had some of the basic interactions in place.

And then I chimed into a Twitter thread.

A healthy dose of peer pressure from @SpectrumNez and @BreakIntoProgram and it pretty much gave me one of those irritating ideas that I couldn’t shake.

So you’re making a Point & Click game - for the Spectrum?

I’m not sure where it’ll end up, but yes - it seems that way.

It took a relatively short space of time to get a basic joystick-powered “verb” menu up.

Verb Menu

I was seeing progress - and I had a purpose, let’s keep going!

Once I had verbs, I needed something to do with them. So I put together a simple inventory.

Inventory Menu

This made me think about how to handle all the controls and areas of the screen. We don’t have a mouse on the Spectrum, so I decided to make this joystick-controlled. You can move between the verbs, then press fire to action it - from there the intention is that you can switch between the inventory and the “stage”.

All of this may change in favour of the floating “crosshair” type cursor, but for now I’m happy with how this works.

Now that I could select verbs and navigate the inventory, I decided it was worth thinking about how verbs and inventory items interact.

At this stage, I have a simple item description:

    item_table: [ list of item offsets ]
    item_1: {
        verb_table: [ ... ]
    }
    item_2: { ... }

Using this, I could start attaching dialog descriptions to items via the “Look At” verb.

In Z80 it looks like this:

	.db 8,"Powder"		; item label
	.db 1			    ; action_count
	.db VERB_LOOK		; actions[action_count]
	.dw dialog_powder_look

I’m using two types of strings in the demo at the moment; one which is zero terminated, the other which is length prefixed. The length prefixed ones let me jump past the strings very quickly, but this is only effective if they’re inlined. In future I may scrap this for pointers to zero-terminated strings. But it’s fine for now.
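To illustrate why length-prefixed strings make skipping fast, here’s a minimal sketch in C++ rather than Z80 (the names and layout are made up for illustration, not lifted from the demo): each hop over an entry is just "advance by 1 + length".

```cpp
#include <cassert>
#include <cstdint>

// Walk a table of inlined, length-prefixed strings and return a pointer
// to the nth entry's text. Layout per entry: [len:1][bytes...].
const uint8_t* nth_string(const uint8_t* table, int n, uint8_t& len_out) {
    const uint8_t* p = table;
    for (int i = 0; i < n; ++i)
        p += 1 + p[0];      // skip the length byte plus the payload in one hop
    len_out = p[0];
    return p + 1;
}
```

With zero-terminated strings you’d have to scan every byte looking for the terminator instead, which is the trade-off the paragraph above describes.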

Inventory Item Descriptions

It’s all very rudimentary but it’s quite nice to be able to start interacting with the system. It’s also brought together a few things; displaying text, timers and basic user interaction.

With that in place, I’ve spent some time learning how sprites work on the Spectrum. In short, they’re annoying - but thanks to @BreakIntoProgram, understanding them has been a lot easier. One day I’ll have a go at writing my own step-by-step technical breakdown on the subject, mostly transcribing my notes into a blog post for you to digest.

With some rough code to handle sprites in place, I needed something to actually show. Using Aseprite, I cobbled together a 16x48 character that (depending how hard you squint) may or may not resemble a certain iconic pirate who visited a place surrounded by water that is inhabited by primates. Don’t worry, I’ll change it.

Talking sprite

What next?

Seeing the animated sprite saying a dialog line made it all feel very “real”, inspiring me to continue with the demo.

The way I see it, there are three logical paths forward:

  1. Implement a dialog system and a conversation between two on-screen characters
  2. Implement player interaction with the stage; pick up item, use item.
  3. Implement the ability to move your character around the “stage” area, swapping between screens.

There’s probably more, but those three points feel like the next natural steps - so that’s what I’ll do!

With this in mind, I need to start thinking about a way to structure this game data better; we need a system that can describe scenes, items, characters and their various interactions. SCUMM did a good job of this; perhaps it’s a model I can explore along with more modern ideas.

Either way, I need to be mindful of the constraints in place when developing for the Spectrum.

I think the next few days I’ll scribble down a very small plot for a demo that involves a few different things. It won’t be a full game, but it’ll be enough of a demo to decide where we go from there.

neccy - Toy 48k Emulator Feb 2020 roundup

Hello again, neccy!

It’s been a while since I posted about neccy, my toy 48k ZX Spectrum emulator. It’s probably a good idea to remind ourselves where we were.

My next milestone is to get Sinclair BASIC working. After that, who knows - I’ll probably try and go for sound and then onto trying to run a game.

It’s a long journey ahead.

And here’s how it looked:

The Copyright Message

And this is what things look like today

neccy @ Feb 2020

A lot has changed since I last talked about this project!

Sinclair BASIC

One of the things I got working pretty soon after the last post was Sinclair BASIC. I had a few bugs in the Z80 emulation that caused BASIC to go wonky. The main thing I got snagged on was that the offset added to the IX/IY registers is signed; I was treating it as unsigned, so in my emulation the offset was always positive.

I also had issues in the IO read routines; I was assuming that no input was a zero byte (0x00), but on the Spectrum the value for unset is 0xFF. That’s somewhat counter-intuitive to those who are used to a 0 bit being “unset” and a 1 bit being “set”, but it makes sense when you get closer to the hardware.
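The active-low idea can be sketched like this (hypothetical emulator helper, not neccy’s actual code): a read starts from all-ones and pulls bits *low* for pressed keys, rather than starting from zero and setting bits.

```cpp
#include <cassert>
#include <cstdint>

// Spectrum keyboard reads are active-low: a held key pulls its bit to 0,
// and "nothing pressed" leaves the low five bits high. An emulated read
// handler therefore starts from 0xFF, not 0x00.
uint8_t read_keyboard_halfrow(uint8_t pressed_mask) {
    // pressed_mask: bit set = key currently held in this half-row (bits 0-4)
    uint8_t value = 0xFF;            // idle lines read back as 1s
    value &= ~(pressed_mask & 0x1F); // pull the pressed keys' bits low
    return value;
}
```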

Speaking of hardware, I bought The ZX Spectrum ULA by Chris Smith - it’s a fantastic deconstruction of the ZX Spectrum hardware and has been invaluable for understanding many of the internal details of the system.

With all that working, Sinclair BASIC was up and running!

One thing that stood out immediately, however, was that the keyboard mapping is so very different to how modern computers work. This is true on the 128K Spectrum, but on the 48K model it’s even more extreme. BASIC commands are entered using a sequence of modifiers and keys that emit a full command.

As a result, programming on the Spectrum using my laptop feels very cumbersome.

There are a few QOL things I could do, such as supporting macros that would emulate the key sequences for backspace, but I haven’t done those yet.

It’s worth noting that pretty much all emulators suffer from this, as it’s down to the underlying system and not the emulators themselves. Either way, my career as a Sinclair BASIC programmer is very slow off the ground!

SNA loading & Z80 speed

Soon after getting BASIC up and running, I felt confident and added the ability to load a SNA file. These files are pretty simple, being essentially a copy of the registers and a dump of the RAM.

I was elated to see that Horace Goes Skiing just worked. There were no issues to speak of and I could play the game (badly) on my keyboard.

Horace on neccy

Shortly after, I got the CPU emulation running at full speed. I am running at exactly 3.5MHz, so not quite the speed of a real speccy, but it’s good enough.

This threw up a few timing issues that caused me to implement clock timings properly. On the Z80, an instruction takes several cycles (T-states) to execute; in neccy I execute the full instruction at the first state and then basically do nothing for the rest. This is far from perfect, but it’s enough for now. At some point I want to go back and emulate the Z80 with cycle accuracy, but that’s another story/project.
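The "do everything on the first T-state, idle for the rest" approach can be sketched as a tiny tick loop (illustrative names; neccy’s real code will differ):

```cpp
#include <cassert>

// Approximate Z80 timing without per-cycle accuracy: run the whole
// instruction on its first T-state, then burn the remaining states as
// no-ops so overall instruction pacing still roughly matches the clock.
struct Cpu {
    int stall = 0;       // T-states left to idle after the last instruction
    long executed = 0;   // instructions actually executed (for testing)

    // Stands in for real fetch/decode/execute; returns the instruction's
    // T-state cost (a plain NOP costs 4).
    int execute_one() { ++executed; return 4; }

    void tick() {
        if (stall > 0) { --stall; return; }   // idle this cycle
        stall = execute_one() - 1;            // one T-state already spent
    }
};
```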

Using the SNA loading and improved timings, I started trying to load some games. Pretty much all of them crashed, had glitches or relied on Z80 instructions I’d not implemented yet. I got as far as getting Jetpac running, but it was unplayable due to bugs.

I also added Kempston Joystick emulation at this stage as it was easy to do and allowed me to play games using my cursor keys.

Sound’s awful

It’s true, neccy does sound awful. It’s also true that I’ve had an awful time trying to get sound working.

As a bit of fun, I thought I’d add support for the 48K’s “beeper”, a one-bit audio signal. Easy, right?

neccy beeper signal

I began by following javidx9’s audio synth series on YouTube and plugging the extension into the olcPixelGameEngine. It was at this point that I realised I know nothing about audio programming. Every sound that came out of the code was nothing but pops and crackles. It sounded terrible.

The audio extension to the olcPixelGameEngine calls my code back at a regular interval (I was using 22050Hz), essentially asking for a sample value at that point (-1.0f to +1.0f). I tried various things, but nothing I did resulted in anything that even resembled the audio I was expecting.

Days passed with various aborted attempts at getting it to work and I just gave up and moved onto something else.

Goodbye olcPixelGameEngine

… well, sort of.

At this point I decided to move away from olcPixelGameEngine in favour of SDL2. There was one specific reason for this: I wanted to use dear ImGui. My hand-rolled GUI looked super retro and had a charm to it, but I kept finding it frustrating to work with when doing something new. I decided that I needed a new GUI system, and I didn’t want to write one.

I had a brief look at getting dear ImGui working with the olcPixelGameEngine, but I decided that SDL2 would be better for me and would allow me to use SDL’s audio capabilities as well.

The thing with olcPixelGameEngine is that I really like its simplicity. The abstractions over the various bits you need are intuitive and easy to work with - so I wanted to keep it.

What I ended up doing was to remove the olcPixelGameEngine implementation code, but maintain its interfaces - or at least the bits I was using. I ended up with a lightweight SDL2 SDLPixelGameEngine; I even kept the olc namespaces.

Unfortunately I lost a lot of the useful bits of code from PGE, such as the text output to a sprite, but I was going to move onto using dear ImGui so it wasn’t a huge loss.

dear ImGui

Armed with an SDL2 powered codebase, I was free to integrate dear ImGui into my code. A couple of issues cropped up, mostly around integrating the rendering systems but I managed to work around those pretty easily in the end.

The particular issue I had was getting the neccy display to render into a dear ImGui window. I was rendering to an SDL_Surface and had to figure out how to get dear ImGui to show it. In the end, I had to make sure that the SDL_Surface was associated with an OpenGL texture and that I was refreshing that texture when things changed. After that, I could use ImGui::Image to show it.

Moving to dear ImGui gave me immediate benefits. I could have separate (movable) windows for all the things I needed debug displays of. The Z80 state, the disassembly window, the audio (beeper) signal and a bunch of other stuff.

dear neccy

Suddenly, adding a new debug window became effortless and made working on the emulator fun (again).

A huge boost came from the fact that the author of dear ImGui had already built and released a memory viewer/editor for the library. It took minutes to integrate and offered me something far better than what I had.

Z80 bugs everywhere

My Z80 had a boatload of bugs in it; here are some examples of games misbehaving.

Buggy Manic Miner on neccy

Buggy Jetpac on neccy

Jetpac was bugged up; it’s there - but not quite. Lots of issues.

Buggy Kong on neccy

I really needed to iron out these bugs, so my attention turned to zexdoc/zexall.

zexdoc (and zexall) is a program that you run on a Z80-powered machine. It essentially runs a ‘family’ of instructions and generates a CRC32 of the output (registers, flags & whatnot). At the end of each family it compares the result to a known CRC32 obtained from a real Z80-based computer. I needed to run zexdoc, but all I could find were TAP files of it for the Spectrum and the original CP/M-based program.

TAP loading

As naive as I was, I opted for the TAP version of zexdoc (more on this later).

I set about figuring out how TAP files worked. Luckily, it’s quite simple and well documented.

Loading from TAP format is essentially simulating the pulse signals that a real tape would play to the Spectrum’s audio input. The loading is performed by the ROM itself.
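Even though the actual loading happens by feeding pulses to the ROM loader, you still need to walk the TAP file’s block structure to know what to play: each block is a 2-byte little-endian length followed by that many bytes (a flag byte, the payload, and an XOR checksum). A minimal parsing sketch, with illustrative names and no error handling:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// One parsed TAP block: flag byte, payload, and whether the trailing
// XOR checksum matched.
struct TapBlock { uint8_t flag; std::vector<uint8_t> data; bool checksum_ok; };

std::vector<TapBlock> parse_tap(const std::vector<uint8_t>& file) {
    std::vector<TapBlock> blocks;
    size_t i = 0;
    while (i + 2 <= file.size()) {
        size_t len = file[i] | (file[i + 1] << 8);   // little-endian length
        i += 2;
        if (len < 2 || i + len > file.size()) break; // malformed; bail out
        TapBlock b;
        b.flag = file[i];                            // 0x00 header, 0xFF data
        b.data.assign(file.begin() + i + 1, file.begin() + i + len - 1);
        uint8_t x = 0;
        for (size_t j = 0; j < len - 1; ++j) x ^= file[i + j];
        b.checksum_ok = (x == file[i + len - 1]);    // XOR of flag + payload
        blocks.push_back(b);
        i += len;
    }
    return blocks;
}
```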


Just loading the TAP format teased out several Z80 bugs. Finally, I got zexdoc running and… lots of tests failed.

zexdoc errors

zexdoc / zexall

zexdoc takes ages to run, so I added the ability to overclock the emulator. This proved handy as it let me run things faster, getting to the errors quicker.

Now began what I can only refer to as a slow grind: finding errors and fixing them, one by one. Some errors literally made no sense. I looked at specs, undocumented opcode details, even other emulators’ source, and couldn’t find what was wrong in some of the basic instructions.

Finally, I gave in and ran zexdoc on Fuse and found out we had the same set of errors!

Fuse zexdoc errors

I was both pleased and frustrated. I still had bugs - games still had issues - so where were they? I couldn’t rely on zexdoc to help me anymore, as many of the tests ‘failed’ but ‘passed’ in the sense that they matched an established emulator. I needed a way to see the wood for the trees.

floooh to the rescue

I found a blog post by a chap called Andre Weissflog (floooh), which explained how he got zexdoc running in his Rust Z80 emulation. Rather than using the Spectrum version of zexdoc, he was using the original CP/M version. Luckily for us, zexdoc relies on just two CP/M OS calls - and these are very simple text output calls that we can trap and use to grab the output.

Better yet, the CP/M version of zexdoc runs on the ‘naked’ Z80 and needs nothing but an attached amount of emulated RAM. I was able to get a test up and running without having to simulate anything else of the Spectrum - no ULA, no screen output, nothing. As I didn’t care about timings either, I could make my CPU run at max speed, even skipping the emulated clock ‘wait’ timings I had put in to get things running at comparable cycle timings to a real Z80.
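For context, the two CP/M calls in question are BDOS functions made via CALL 5: function 2 (C=2) prints the character in E, and function 9 (C=9) prints a ‘$’-terminated string at DE. A sketch of the trap an emulator might install when PC hits 0x0005 (illustrative, not floooh’s or neccy’s actual code):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Fake the two BDOS services zexdoc uses, returning what would have been
// printed to the CP/M console. C selects the function; E/DE are the args.
std::string bdos_call(uint8_t c, uint8_t e, uint16_t de,
                      const std::vector<uint8_t>& ram) {
    std::string out;
    if (c == 2) {
        out += static_cast<char>(e);          // function 2: console char out
    } else if (c == 9) {
        for (uint16_t a = de; a < ram.size() && ram[a] != '$'; ++a)
            out += static_cast<char>(ram[a]); // function 9: '$'-terminated
    }
    return out;
}
```

Capturing this output is all you need to read zexdoc’s pass/fail lines without any screen emulation.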

Armed with this, I was able to rip through and fix a raft of Z80 bugs pretty quickly. It even forced me to implement a bunch of undocumented Z80 instructions and handle the flags correctly (including bit 3 and 5, the X/Y flag).

Current state of the Z80

I’ve fixed all but one failing zexdoc test:

ld <bcdexya>,<bcdexya>........  ERROR **** crc expected:478ba36b found:8088c9d9

zexall fails 5 tests:

bit n,(<ix,iy>+1).............  ERROR **** crc expected:83534ee1 found:9581a6ba
bit n,<b,c,d,e,h,l,(hl),a>....  ERROR **** crc expected:5e020e98 found:e6624aeb
ld <bcdexya>,<bcdexya>........  ERROR **** crc expected:478ba36b found:8088c9d9
ldd<r> (2)....................  ERROR **** crc expected:39dd3de1 found:405ca1c1
ldi<r> (1)....................  ERROR **** crc expected:f782b0d1 found:f531e964

I have spent hours - no, days - hunting down the ld bug and have had no joy. Out of them all, that’s the one that’s killing me most.

Current state of neccy

Games like Jetpac play now (even though I suck at it):

I suck at Jetpac

Many games, such as Uridium and Return of the Jedi, either outright crash or never get past a specific point, as if something the game expects never happens.

Uridium crashes

I think that the issues are either in the Z80 itself (the ld bug?) or are down to expectations on hardware timings that I simply don’t handle yet.

For example, my emulation does all the Z80 work in one tick and lies dormant for the rest; as a result lots of data movement around the bus happens instantly instead of over a series of cycles. I don’t emulate the memory contention of the ULA in the ‘slow RAM’. I don’t emulate the ULA fetching bytes of screen RAM. I don’t emulate any floating bus values that many games use for timing to screen syncs. I’m convinced that my interrupt handling has bugs or quirks. My beeper audio is better, but still has issues because I still have variable frame times. I haven’t even looked at 128K stuff, like banked RAM or AY sound chips. Hell, I don’t even support TZX yet.

I still have a lot to do on this project.

With that in mind, I decided to take a break from neccy for a bit. It was mostly down to complete burnout from trying to trace the Z80 bugs. The ld bug in particular became days of grind that turned into an all-out death crawl - it completely killed the joy of working on this project. At least, it did for a bit.

Real hardware

Fortunately, at the time of this Z80 emulation fatigue, a good friend of mine turned up one day with two Spectrums he’d found in his loft. He knew about my emulator project and asked if I wanted them - of course!

With this, I decided to get them restored by the fantastic Mutant Caterpillar - Ian & Alex did a fantastic job on the restoration, even doing the various magic mods that allow the Spectrums to work better with modern TVs. I bought a Zipstick and a DivMMC Future to enable loading from SD cards.

Now I can play Uridium!

Next steps

One of the reasons for starting neccy was to have a pop at writing an emulator; the other - more personal - reason is that I wanted to have a go at making games for a system I owned when I was a kid. That was true back when I started it, and it’s even more true now.

I’ve been following Dean Belfield’s recent Spectrum coding and he’s really inspired me to have a go.

The next steps for me are exploring the various options for coding on a 30-year-old computer in 2020.

See you next time. I’ll try not to leave it so long ;)

Hello, neccy

Hello, neccy!

It’s been a (long) while since I blogged. I thought I’d break the silence by talking about a fun little project I started.

I was working on a 2D physics-based game and was getting frustrated by my own lack of maths knowledge. One night I watched One Lone Coder’s video series on building a NES emulator and it triggered something of a nostalgia vibe in me.

Growing up, we never had a games console - instead my folks bought a ZX Spectrum +3. I was about 8 at the time, and having a strange typewriter hooked up to the TV was fascinating to me. I eventually learned Sinclair BASIC and my programming life was kicked off.

The OLC NES emulator videos sparked off a very silly little idea - “I’m going to write a spectrum emulator!”.

Not a Speccy

I needed a name for my new git repository - “Not a Speccy”, thus “neccy” was born.

First Shot

I’m coding neccy in C++ and am using the olcPixelGameEngine for the basic framework. I picked it as it seemed to be a nice little single-header library and it gets all the tedious “new project” guff out of the way. I’ve not felt the need to move from it yet, so I will likely stick with it until something changes. Thanks to David (javidx9) for this little library, it has been very helpful.

Writing a Spectrum emulator naturally involves emulating the CPU, a Zilog Z80 processor. For an 8-bit processor there are actually quite a few instructions - especially if you try and emulate all the undocumented ones.

My emulation of the Z80 has been re-written 3 times so far on this project. I first tried to code-generate it from C#, which ended up in a bit of a disaster. The second variant I threw away as I was making too many assumptions and tried to over-generalise the ALU logic. This current version is “ok”, but needs some cleanup.

My emulation is very crude at present. I’m not simulating the various T & M states in the CPU, instead choosing to do all the fetch, decode and operation code at once and doing nothing for the rest of the cycles. In future this may change, but it’ll do for now.

Emulation is tough

If I’d realised how tough it was to write an emulator, I probably wouldn’t have started the project. However now I have, I’m pretty hooked on it.

For starters, you have to learn a lot about the machine you’re emulating. You have to understand the guts of the CPU, the memory maps used, how the screen is rendered, the various IO port mappings to handle peripherals; then there’s all the timings and everything else to get stuff working.

When I started the project, I realised I’d need to visualise what was going on. So the very first thing I ended up writing was actually a memory viewer and disassembler.


From here, I can see the state of the CPU and look at the program that is being executed. I started out writing my own little test assembler routines, but realised very quickly I needed something “real”. From here, I started to try and run the 48k Spectrum ROM.

I picked the 48k ROM as it’d be simpler than the later 128k models of the Spectrum, which had memory paging and better sound - if I ever get this thing “working” I’d probably try and progress to the 128k model Spectrum - but let’s not run before we can walk.

Running the ROM

I ended up deciding to implement the parts of the emulator as I went; adding the instructions as I needed them in order to run the ROM.

At this point I realised that I had to start actually understanding the programs I was running, which meant being able to read the disassembly of the ROM itself. Thankfully, SkoolKit has a great 48K ROM disassembly, which has helped immensely.

When I got to the point where the ROM was clearing the screen, I decided to figure out how the Spectrum video memory worked.

This was “fun”, but resources like Break into Program and Overtaken by Events were a huge help.

Eventually, I was able to load a scr format file of my favourite game, Uridium.

Uridium Loading Screen

This was a very exciting milestone in the journey and the first time that I felt I was doing something right.

Dark Days

After this I had the next milestone in sight - I wanted to see the iconic copyright message text.

It turns out this was actually a while off; mostly due to lots and lots of little bugs and glitches in my Z80 emulation. Little things catch you out here - not setting the flags correctly, not treating 16-bit numbers with the correct endianness (which caused a stack stomp), forgetting that relative jump instructions need a signed operand and - the one that took me far too long to figure out - a bug where I was treating LD BC,(NN) like LD BC,NN, essentially operating on a pointer value instead of following the indirection.
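The LD bug in a nutshell, sketched in C++ with made-up helper names: `LD BC,NN` loads the 16-bit operand itself, while `LD BC,(NN)` loads the 16-bit value stored *at* that address. Treating the second like the first means operating on the pointer, not its target.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Little-endian 16-bit read, as on the Z80.
uint16_t read16(const std::vector<uint8_t>& mem, uint16_t addr) {
    return mem[addr] | (mem[addr + 1] << 8);
}

// LD BC,NN: the operand IS the value.
uint16_t ld_bc_imm(const std::vector<uint8_t>&, uint16_t nn) { return nn; }

// LD BC,(NN): the operand is an ADDRESS; follow the indirection.
uint16_t ld_bc_ind(const std::vector<uint8_t>& mem, uint16_t nn) {
    return read16(mem, nn);
}
```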

Finally after lots of swearing and frustration I got to the milestone I set.

The Copyright Message

Current Status

I’m currently implementing the Keyboard IO with the aim that I can try and use Sinclair BASIC for the first time.

As you can see though, there’s a few glitches…

Input Glitches

One thing about writing an emulator is that it’s a case of fixing issue after issue, glitch after glitch… It’s frustrating at times (ok most of the time), but very satisfying when stuff starts to work.

My next milestone is to get Sinclair BASIC working. After that, who knows - I’ll probably try and go for sound and then onto trying to run a game.

It’s a long journey ahead.

OldCode (Manta-X) July 2017 Roundup

July 2017 Roundup

Wow, it’s been a month already? I’ve not written an update for a while, but it doesn’t mean that activity has stopped on the OldCode / Manta-X project.

Name Change

The last thing I did in July was to change the “internal” branding (namespaces, project names, etc) from “Manta-X” to “OldCode”.

Manta-X was always a code name, and one from the original days of the project (pre-rediscovery & updates, I may add). As a result, it feels ‘old’ in a way that I can’t really relate to anymore. I no longer remember the original premise or idea of the game, and there’s very little left that resembles the excavated piece.

I’ve come to associate this project as being “Old Code”, even though it’s now very different to what it was when I unearthed it.

July Recap

Let’s go through the changelog and recap on the main areas of work last month…

Bye Bye 3D

At the start of the month I followed up on my promise and swapped over completely to a sprite-based approach using SDL_Texture. As a result, almost everything ‘3D’ was removed from the game. Models, vertices, OpenGL, the lot.

It took 8 checkins over 2 days to remove 3D and swap to sprites. Not bad, really.

Math Types

I changed how I do my basic ‘Vector’ stuff, removing the old Vertex class and adding templated Vector2<T>, Vector3<T> and Rectangle<T> classes. These are much simpler to work with and can be int- or float-based (or any other type, really).
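A minimal sketch of the templated-vector idea (only a couple of operators shown; the real classes surely have more):

```cpp
#include <cassert>

// One template works for int, float, or any arithmetic-ish type, instead
// of maintaining separate int/float vector classes.
template <typename T>
struct Vector2 {
    T x{}, y{};
    Vector2 operator+(const Vector2& o) const { return {x + o.x, y + o.y}; }
    Vector2 operator*(T s) const { return {x * s, y * s}; }
    bool operator==(const Vector2& o) const { return x == o.x && y == o.y; }
};
```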

I also trashed my Matrix class as it wasn’t used; when I need matrix maths I’ll create a new one.


Now that I’m committed to SDL2, I removed the Win32 filesystem I’d added to replace PhysFS and replaced it with one based on SDL_RWops. It was crazily simple to do.


I created an eventing system as a standalone git repo hosted (privately, for now) on bitbucket. I integrated this into OldCode and hooked up basic GameObjectSpawned events. It was nice as it let me immediately start decoupling code that was interested in entity spawning.

Level Loading

A big change I made was to make the game initialize entirely from data using a json-based level file. This was great as it allowed me to remove some hard coded stuff from the C++ code.

A result of doing this was that I re-introduced an archetype- or class-based system that allows an entity type to be specified in a json file and then spawned from that class type.

Game Services

I started to bring in the notion of game ‘services’ to work with entities that have specific components. This allowed me to move logic out of components and into services, meaning that my components are now totally data-only.

A result of this is that I now have a generic Ticker system that ticks anything registered with it.

Game Services aren’t really generic yet, but I can see them evolving that way.

GameObject System

A huge change I made was to remove std::shared_ptr use in my GameObjects. I now have a GameObjectHandle that is basically a weak reference handle to an object. The handle is exchanged for a GameObject by the GameObjectService. Because the handle is weak, I can despawn stuff and have handles invalidate automatically. The implementation of this is a post in its own right.
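One common way to build weak handles like this - and purely an assumption about the actual implementation - is to pair a slot index with a generation counter. Despawning bumps the slot’s generation, which silently invalidates every outstanding handle to that slot:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct GameObject { int hp = 0; };

// A handle is just an index plus the generation it was issued against.
struct Handle { uint32_t index; uint32_t generation; };

class GameObjectService {
    struct Slot { GameObject obj; uint32_t generation = 0; bool alive = false; };
    std::vector<Slot> slots_;
public:
    Handle spawn() {
        slots_.push_back({});
        slots_.back().alive = true;
        return { static_cast<uint32_t>(slots_.size() - 1),
                 slots_.back().generation };
    }

    // Exchange a handle for the object; nullptr if the handle went stale.
    GameObject* resolve(Handle h) {
        if (h.index >= slots_.size()) return nullptr;
        Slot& s = slots_[h.index];
        return (s.alive && s.generation == h.generation) ? &s.obj : nullptr;
    }

    void despawn(Handle h) {
        if (resolve(h) != nullptr) {
            slots_[h.index].alive = false;
            ++slots_[h.index].generation; // stale handles now fail to resolve
        }
    }
};
```

The nice property is exactly the one described above: nobody has to be told a handle died; the next resolve simply returns null.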

Dynamic Spawning

As a result of all this stuff, I changed the way spawning of GameObjects works; objects are created and then spawned - at which point they’re renderable, updatable, etc. Crucially, they can also be despawned now - which means they’re removed from any systems that care about entities. If an object is despawned, any handles pointing to it become invalid, meaning that systems that are interested in them have to discard the handle. Of course, they receive a GameObjectDespawnedEvent to give them an instant notification of this.

Because I can now spawn and despawn entities, I added WeaponFire, which is the player firing bullets. They travel until they exit the level bounds and despawn.

Next Month

That sums up the stuff I did in July (and the start of August). My next goals are:

  • Get collisions implemented. This would let me detect weapon collisions with enemies (and the player) and react to that. The beginnings of a real game will drop out of this.
  • Unit tests. I’ve gone on too long without automated testing and have made some stupid mistakes that tests would pick up. So I’m going to integrate my upptest project and add unit tests around the core systems.
  • Add multiple enemies. Let’s start making a game, not a tech demo.

Until next time.

Manta-X: 2d or bust!

Why 3D is bad for me

The goal of this project was always to make a game. Specifically, to make a top-down, side-scrolling shmup type of game. The previous old code entries talked about ripping out superfluous stuff and pulling the project back to the essence of what 2004 me was trying to achieve.

I started looking into modernizing the codebase to a newer version of OpenGL, one that moves away from immediate mode and into the realm of shaders. I created a new branch called opengl-upd and set to work, ripping out the current stuff and adding in magical things that I was basically learning as I went from a ‘modern’ OpenGL tutorial.

It turns out that for me, someone who hasn’t touched graphics programming of any kind since the late 2000s, the leap is huge. Rendering was broken for a very long time in that branch. Getting a triangle up took a while and I eventually got the (untextured) models loaded and showing - however the camera system was totally screwed.

As part of this update, I realised I’d broken one of my core tenets in this project - that of refactoring: keeping the existing stuff working as I evolved the code. The old rendering system just wasn’t compatible with the newer code, so I had a black screen for a long time - something which wasn’t desirable.

Sitting back and reflecting on this, I realised that I was spending a lot of time learning OpenGL and not actually progressing with the original goal of having a working game. This was compounded by my lack of 3D tools knowledge (MilkShape 3D is defunct now) and I realised I’d have to learn Blender and other tools. It got me thinking: if I was doing all this, I could just quit and use Unreal or Unity, as they have all the 3D features I’d want, for free.

Stripping it back

With this in mind I decided to abandon the OpenGL update branch and go back to something more basic. I want to get some simple game built here, not mess about with tech and tools forever.

Not only am I abandoning ‘modern’ OpenGL, but I’m going to abandon OpenGL entirely in favour of a sprite-based system using SDL2.

Swapping over to sprites allows me to crudely draw my own graphics and render them up relatively quickly, allowing me to focus on building out the game itself instead of worrying about graphics tech.

Implementation Plan

The goal here is to be able to rip out the 3D stuff and swap to sprites without breaking what’s there - thankfully there’s actually very little there right now.

I could approach this in two ways; the first would be to have a combined approach, whereby sprites are rendered alongside the 3D stuff. The second would be to keep them completely separate. I decided that for my own sanity, I’m going to keep the concepts totally separate - I’ll keep the 3d stuff running until the sprite systems are at parity, then I can remove the 3d stuff entirely.

To do this, I’ve implemented a runtime toggle - basically when the game starts up it reads a configuration file and runs in OpenGL mode or SDL (Sprite) mode.

Based on this mode, we now instantiate either a GLRenderWindow or an SDLRenderWindow, both of which are entirely separate and deal with their own type of rendering.
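The toggle can be sketched as a tiny factory behind a common interface (class names borrowed from the text; everything else is illustrative):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Two unrelated rendering backends behind one interface, chosen at
// startup from a config value.
struct RenderWindow {
    virtual ~RenderWindow() = default;
    virtual std::string backend() const = 0;
};
struct GLRenderWindow : RenderWindow {
    std::string backend() const override { return "opengl"; }
};
struct SDLRenderWindow : RenderWindow {
    std::string backend() const override { return "sdl"; }
};

std::unique_ptr<RenderWindow> make_window(const std::string& mode) {
    if (mode == "sdl") return std::make_unique<SDLRenderWindow>();
    return std::make_unique<GLRenderWindow>();   // default to OpenGL mode
}
```

Keeping both behind the same interface is what makes it possible to delete the 3D path later without touching the call sites.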

We still have a problem, however. A lot of the old code that was ported over to the component system means that components are responsible for rendering themselves. So the Camera, Model and even the Collision components have some OpenGL code in them. This code is easily lifted and shifted into a specific Renderer system for the type of stuff we’re doing. It also means that the rendering code becomes consolidated in one place and that components lose their awareness of how they get rendered. This is a good thing.

Next up, we need a set of classes and components to mirror the current 3D components. These will be:

  • Sprite - To hold the sprite info (equivalent to Model)
  • SpriteManager - Equivalent to ModelManager, allows loading of sprites from config
  • SpriteComponent - Associates a Sprite to a GameObject
  • Camera2Component - A 2D camera component
  • SpriteRenderer - Renders sprites

Time to get cracking on all this. Hopefully I’ll have something to show soon.

gmc - Hacking a toy VM

gmc: Hacking a toy VM

If anyone remembers me from back in the “old code” days (circa 2006), there was a scripting language I used to use called GameMonkey Script. It was a Lua-like language that was designed to be more familiar to C++ programmers. Back in the day, I used to contribute minor changes to the core language itself as well as write about it.

GameMonkey itself is long dead, having been put into maintenance-only mode by one of the original authors (see Greg’s GitHub) and abandoned completely by Matt, the other author. Lua has since hit version 5 and left it behind in terms of raw performance and adoption in the games industry.

However for me, GameMonkey (herein referred to as GM) was the project that kick-started my interest in virtual machines, compilers and other such things that certain programmers get a kick out of. I’ve previously implemented a version of the GM virtual machine in C# (greenbeanscript) and have many long-lost projects where I’ve created some small VM to play around with things.

Newer languages such as Rust, Go and even TypeScript have intrigued me with how they’ve approached syntax and other language design questions. As a result, I felt the stirrings to mess around in this space again after a long hiatus.

First goals

Rather than do as some people would and jump right into LLVM, language theory and other such gubbins, I decided to take things slowly and get my head around building a basic VM and assembler. I largely see this work as throwaway, so I’m free to make mistakes and not feel so bad about it.

So the first goal is to create a VM that can run this pseudocode program:

int add(int a, int b)
    return a + b

int a = 100
int b = 200
int c = add(a, b)

Supporting this small program means dealing with:

  1. Primitive types
  2. Static typing
  3. Local Variables
  4. Function definitions
  5. Function calls with parameters
  6. Return values
  7. Bytecode formats

So for now, I’m going to support only the int type. I need to decide how to manage variables, hold script functions and deal with passing arguments to them.

Bytecode Format

I decided to think about bytecode first as this generally dictates the type of patterns you use up front; specifically, whether your VM is stack-based or register-based.

The original GameMonkey Script was stack-based, which meant that the majority of the bytecode instructions were simple: they took their operands from the stack and pushed their results back onto the stack.

For example, computing a = 1 + 2 would be something like:

push 1
push 2
add             // 2 values popped from stack, result pushed back
setlocal @a     // local variable a set from top of stack
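
As an illustration of the dispatch loop a stack machine implies, here’s a minimal sketch in Python (the opcode names follow the listing above, but everything else is invented for the example):

```python
# Minimal stack-machine sketch. Instructions are tuples; opcodes mirror
# the listing above but this is illustrative, not GameMonkey's real VM.
def run_stack(program):
    stack, locals_ = [], {}
    for instr in program:
        op = instr[0]
        if op == "push":
            stack.append(instr[1])       # push a literal value
        elif op == "add":                # pop two values, push the result
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "setlocal":           # set local from top of stack
            locals_[instr[1]] = stack.pop()
    return locals_

# push 1 / push 2 / add / setlocal @a
locals_ = run_stack([("push", 1), ("push", 2), ("add",), ("setlocal", "a")])
```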

Whereas a register-based VM (such as Lua 5) would encode the operands into the instructions themselves:

add 1, 2 -> @a

Register-based VMs typically have larger bytecode ‘scripts’ as their instructions are larger (typically 32/64 bits instead of 8). However, you can see from the simple example above that in register-based machines each instruction tends to do more “work”, meaning that the VM spends less time in the interpreter and so can be much faster.
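
To make the size trade-off concrete, a Lua-style fixed-width encoding packs the opcode and all operands into a single 32-bit word. Here’s a sketch in Python (the four 8-bit fields are my own choice, not Lua’s actual layout):

```python
# Pack a register-style instruction into one 32-bit word: an 8-bit
# opcode plus three 8-bit operand fields. Field widths are illustrative.
def encode(op, a, b, c):
    return (op << 24) | (a << 16) | (b << 8) | c

def decode(word):
    return ((word >> 24) & 0xFF, (word >> 16) & 0xFF,
            (word >> 8) & 0xFF, word & 0xFF)

word = encode(1, 0, 1, 2)   # e.g. opcode 1 = 'add', operands r0 r1 r2
```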

One of my original thoughts was to speculate about converting the original GameMonkey from being stack-based to being register-based to see the impact; but as that largely meant rewriting the codegen and VM, I decided against it at this point in time (although it would be cool to do that).

As a result of this, I’ve decided that my toy VM will be register-based.

With this decision made, I looked at some descriptions of how Lua does it. I actually like Lua’s implementation here as it makes sense. We’re a VM, so we don’t have to map VM registers onto hardware registers - as a result, they’re basically part of the stackframe.


The stackframe layout for the toy (register-based) VM would be something like this:

0: Parameters [pN]
pN: Constants [kN]
kN: Variable registers [rN]
rN: Top of working stack

With the simple pseudo script of:

int a = 100
int b = 200
int c = a + b

We’d end up with a stackframe that looked something like this:

    [No params]
k0: int -> 100
k1: int -> 200
r0: int -> a
r1: int -> b
r2: int -> c

We can therefore assign variable registers from the constant table such as:

loadk k0 r0     ; load value from k0 (100) into variable register r0
loadk k1 r1

The add instruction would take the two source registers and the destination, eg:

add r0 r1 r2    ; add r0 and r1, store in r2
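
A toy interpreter for just these two instructions might look like this sketch in Python (the frame layout follows the description above; the tuple-based instruction format is purely illustrative):

```python
# Toy register-VM sketch: constants live in a table, variables in a flat
# register file, mirroring the stackframe layout described above.
def run_frame(consts, num_regs, code):
    regs = [0] * num_regs
    for instr in code:
        op = instr[0]
        if op == "loadk":            # loadk kN rN: copy constant into register
            _, k, r = instr
            regs[r] = consts[k]
        elif op == "add":            # add rA rB rC: regs[c] = regs[a] + regs[b]
            _, a, b, c = instr
            regs[c] = regs[a] + regs[b]
    return regs

regs = run_frame([100, 200], 3,
                 [("loadk", 0, 0), ("loadk", 1, 1), ("add", 0, 1, 2)])
# regs[2] now holds a + b
```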


Now that I have a couple of instructions and a general idea of how registers work, I think it’d be a good idea for the first parser to be an assembler. The reasoning is that it would speed up my iteration time on scripts and force me into a data-driven way of initialising things like functions. The first pass would likely start in code, but I’d like to write the assembler early to dust off my parsing skills before it comes time to think about a language.

With this in mind, we could specify the above program as something like this:

.func _main
.params 0
.consts 2
.const 0 int 100
.const 1 int 200
.locals 3
.local 0 int
.local 1 int
.local 2 int

loadk 0 0
loadk 1 1
add 0 1 2
ret 0

It’s not the most readable of syntax, so perhaps we could consider some sort of symbol table in the assembler:

.func _main
.var a int
.var b int
.var c int

loadk 100 a
loadk 200 b
add a b c
ret 0

The consts are now inlined, with the assembler being required to keep track of the parsed constants and assign them to a prototype for the function. Variables are referenced by identifier now, requiring a symbol table to be created.
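
As a sketch of what that symbol-table pass could look like (Python for brevity; the directive handling is simplified and only the opcodes from the example above are supported):

```python
# Sketch of the symbol-table flavour of the assembler: '.var' lines build
# a name -> register map, 'loadk' interns constants into a table. The
# directives follow the example above; everything else is invented.
def assemble(text):
    symbols, consts, code = {}, [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == ".var":               # .var <name> <type>
            symbols[parts[1]] = len(symbols)
        elif parts[0] == "loadk":            # loadk <literal> <var>
            consts.append(int(parts[1]))     # intern the constant
            code.append(("loadk", len(consts) - 1, symbols[parts[2]]))
        elif parts[0] == "add":              # add <src> <src> <dst>
            code.append(("add", symbols[parts[1]],
                         symbols[parts[2]], symbols[parts[3]]))
        elif parts[0] == "ret":
            code.append(("ret", int(parts[1])))
    return {"consts": consts, "locals": len(symbols), "code": code}

proto = assemble(".func _main\n.var a int\n.var b int\n.var c int\n"
                 "loadk 100 a\nloadk 200 b\nadd a b c\nret 0\n")
```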

Seems like creating this assembler would be a logical first step.

Now the question is do I write this in C++ or C99?

Until next time!

  • Introduction to GameMonkey Script: Part 1
  • Introduction to GameMonkey Script: Part 2
  • Continuing GameMonkey Script: Advanced Use

Updating Manta-X (Part 3)

Updating Tech

Whilst I have stripped out a lot of cruft and slowly begun bringing the project up to date, remember that a lot of the technology we’re currently based on is very old - even after the update from a couple of years ago when I brought in SDL 1.2, TinyXml 2 and Visual Studio 2012.

I decided that whilst things are fairly simple, I should upgrade some of the foundations - it’ll be easier to do it now before things get more complex.


SDL2

The first thing I wanted to do was get onto SDL2, the current branch that replaces the old 1.2 branch. The biggest reasons for doing this are that at some point I’d like to upgrade to OpenGL 3, and that SDL2 improves a lot of things, including controller support.

First thing to do was to look at the really handy SDL2 migration guide and get an overview of the main changes. For me, it was mostly around creating the window and handling keyboard input.

I tackled it head on:

  • Downloaded SDL2 version 2.0.4 to deps
  • Updated premake to reference it instead of 1.2 (folder paths, link objects)
  • Compile
  • Fix resulting errors

The errors were exactly as the SDL2 migration guide suggested. The window creation & OpenGL context stuff was slightly different, which meant changing the RenderWindow. Keyboard input needed to use scancodes - so that got changed in KeyStates and PlayerControllerComponent. Once I’d changed a few minor things to handle these differences I was up and running on SDL2 without any side effects. And that was it. The upgrade was committed in a single CL that migrated to SDL2 and removed SDL 1.2 entirely.

Visual Studio 2015

Visual Studio 2012 is good; however, its toolset supports only a very limited subset of C++11. There’s a bunch of stuff in the C++11 standard that I like, plus I generally prefer to be on a fairly recent compiler version where possible. As a result, I wanted to upgrade to Visual Studio 2015.

Turns out that my prior changes of upgrading to SDL2 and using premake made this as trivial as updating my premake.bat file to pass the argument vs2015 instead of vs2012. And that was literally it. Clean, build, run - it all went through without a single issue.


XML to JSON

I used to be a huge fan of XML. Nowadays I don’t touch it if I have a choice. Like all the cool kids out there, I use JSON. I use it professionally and it’s a lot more familiar to me from a hand-authoring and parsing point of view. A couple of months ago, I wrote a JSON parser for fun to practice my TDD skills. It turns out that this exercise will be useful.

Given that I wanted to transfer a lot of the component initialization code to be data-driven, I decided to start this work by using my JSON library.

Over several commits, I gradually migrated away from XML to JSON, loading components from a JSON file. This culminated in me removing TinyXML from the build. Like everything else, premake has made this sort of thing very easy so far. I’ll write more about the migration to JSON in another post.
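
As a rough illustration of the idea (Python for brevity; the config schema here is invented, not the project’s actual format):

```python
import json

# Hypothetical component config; the real project's schema may differ.
CONFIG = """
{
  "player": {
    "components": [
      {"type": "Model", "file": "ship.ms3d"},
      {"type": "Collision", "radius": 2.5}
    ]
  }
}
"""

def load_components(text, name):
    """Return the list of component definitions for a named game object."""
    data = json.loads(text)
    return data[name]["components"]

components = load_components(CONFIG, "player")
```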

If you’re not using it - go use premake now. It’s excellent.

Old Tech - Next Gen

There’s a bunch of old technology in this project still.


Renderer

I’m using a really old-skool, immediate-mode OpenGL renderer. This style of renderer was old-hat back in 2004 and in 2016 it’s prehistoric. I want to upgrade to using shaders, vertex buffers and all that. The thing is, I don’t know anything about it. I’m going to have to learn it - this project would be a nice way of learning this stuff as it’s small and could be done pretty quickly. I’ll be looking at this in the future as it’s something that interests me.


Texturing

I ripped out my own texturing and image loading system as it wasn’t used. I had a basic (uncompressed) TGA loader and that was it. I’m sure that when I need it, there will be a bunch of libraries out there to use. I’m confident that’s not going to be a concern.


3D Assets

The 3D assets in this game were created in Milkshape3d and are loaded from the source format. They’re low-poly and untextured - I’m happy with low-poly, although this was 2004 low-poly, not 2016 low-poly!

The real problem is that there hasn’t been an update to Milkshape3d since 2009. That’s 7 years ago. It’s abandonware. I could keep using this tool or I could migrate to another tool. Having asked on Twitter, the three main recommendations are:

  • Blender
  • Wings3d
  • Maya LT

They’re all viable (although Maya is a harder justification due to cost), but the biggest blockers are the tools themselves and time. Modern tools seem to have a super-complicated interface compared to Milkshape3d, and all of them will take a lot of time to learn to achieve stuff as basic as what I have now. However, if I ever want to add more content to the game, I’m going to have to decide whether to battle on with a dead tool or to dedicate time to a newer one.

When it comes to deciding on the tool, the export format will be important - this will basically be my import format, or something I use to derive it. I’m pretty confident that I won’t need to roll my own importer and that assimp or similar libraries will help.


Asset Pipeline

The game currently loads raw source assets without processing; the “conversion” to game format is done at load time. For now this is acceptable - however, at some point in the future I’ll likely need to build in a conversion pipeline. I’ll do this when I need to - it’s definitely bottom of the pile for now.

Build Improvements

I’m a HUGE advocate of testing and continuous integration. One huge gap right now is that my only test strategy is to run the game and manually verify the results. This is not a scalable approach. As I add new features and the code gets more complex, things will break - and I won’t know about them until it’s too late. Adding a level of testing (unit tests & integration tests) will help here. I’ve been using my own test framework, upptest, for various other projects, so it’d be easy to add it here.

The next thing that would be useful would be to hook up a system such as Visual Studio Online or Travis CI to ensure that the project builds on a machine that isn’t my laptop. It’d also help with initial non-Visual Studio compilation verification (eg: clang). Again, this would be easy to set up now whilst things are simple - it’s worth considering if I intend to treat this as a proper project.

Wrapping up

There’s a whole bunch of other tech improvements or decisions I could make; however, I’m definitely not making them now. I’ll tackle each problem as it comes, making the choice using the information I have at the time.

In the next post, I’ll be talking about how I ripped out the XML ShipClass system in favour of a JSON-based component initialization approach.

Until next time!