Oversimplified History of Retro Game Consoles for Programmers

It is always useful to look at the past to understand the current state of affairs. This article is a brief overview of the history of game consoles from a programmer's perspective. Let's understand the limitations and the driving forces that helped shape the technologies we use today in modern game development.

how to make a game engine

Alright. This is going to be a fun one!

We are going to look at some really cool old tech together, try to understand their limitations, and put things into historical perspective. This will hopefully help us understand why we had to constantly come up with fancier and smarter ways of developing games from one generation to the next.

But first, let me quickly explain the motivations that made me write this article from the perspective of an educator.

The Cost of Abstraction

When students arrive at a game programming class, one of the first things they do is install a bloated IDE that eats 10GB of their hard disk and consumes ridiculous amounts of their laptop's RAM. On top of that, multiple hidden DLLs and dependencies become part of the final program without their knowledge, and we still add another huge layer of abstraction by asking them to install OpenGL or some other graphics API to take advantage of the GPU (which itself requires students to write some sort of vertex and pixel shaders just to run anything).

Yikes! That's a lot of layers of abstraction and tech jargon for students to digest. And all that just to render a boring triangle on the screen.

opengl triangle
There are a lot of hidden parts that need to work together for us to display a simple triangle on the screen.

Don't get me wrong; abstraction is a great thing and it's one of the most powerful tools we have as programmers. However, in this case, abstraction does not come at zero cost and overwhelms students.

What really is OpenGL, and why do we need it? Why do we need a graphics API in the first place? What is inside all these DLLs, and what is an operating system API? Is our project 32-bit or 64-bit? What is a linker error, and how is it different from a compiler error? And ultimately, do we really need all this to code something so small?

Most lecturers will simply tell you that you "don't need to understand how any of these things work to continue coding your project." While this is technically true, it always leaves a bitter taste in my mouth.

As programmers, we are taught to move fast and live with some unanswered questions. Personally, I like to go back and understand why things are the way they are. For me, that usually happens when I look back in time and learn about the motivations that caused technology to evolve the way it did.

In this article, I'll try to do exactly that! Let's go back in time and analyze the history of game consoles to learn why we develop games the way we do today.

Video Game Console Generations

Buckle up and get ready! Let's talk about some cool stuff.

If you have ever read anything about video games, you probably know that we divide consoles into generations. A new generation typically appears roughly every five years, and this segmentation has to do with market competition and technological advancements.

This division of consoles into generations is not an exact science, and it's questioned by many gamedev connoisseurs out there. That being said, I'll still use it to help us make sense of the timeline as we discuss different consoles and their games.

game console generations
A timeline of the nine generations of home game consoles.

Let's hit the ground running with a very short summary going backwards. After we are done with this quick review, we'll proceed to take a closer look at the first six generations and their main technologies.

  • 9th Generation: We have just entered the 9th generation of home game consoles with the PS5 and the Xbox Series X/S slowly making their way into people's homes.
  • 8th Generation: We're also in the middle of an ongoing 8th generation. The main competitors are the PS4, the Xbox One, the Wii U, and the Nintendo Switch. This generation was the one responsible for pushing higher framerates and popularizing 4K resolution games.
  • 7th Generation: This is the generation of the PS3, the Xbox 360, and the original Wii. This was where we saw a major shift toward heavy use of online stores and downloadable content as an integral part of new games, as well as the new wave of motion-controlled Wii games.
  • 6th Generation: The 6th generation was marked by the PS2 becoming the best-selling console in history, by Sega dropping out of the competition after releasing the Dreamcast, and by Microsoft entering the console market with the original Xbox. This generation focused on making consoles an integral part of the living room, helped by the adoption of DVDs and improved video resolutions.
  • 5th Generation: Here is where Sony entered the home console market with the original PlayStation. Other famous consoles of the era were the Nintendo 64 and the Sega Saturn. This generation saw the jump to 32-bit (and even 64-bit) hardware, as well as a huge wave of 3D titles for the home console powered by dedicated polygon-rendering hardware.
  • 4th Generation: The 4th generation was marked by the explosion of 16-bit games with the release of Nintendo's SNES and the Sega Genesis (Mega Drive). This was the generation responsible for bringing the audiovisual quality of top arcade machines to the home market, as well as cementing the famous company mascots in the public's consciousness with Sonic (Sega) and Mario (Nintendo).
  • 3rd Generation: This was known as the 8-bit generation, with popular consoles like the NES (Nintendo Entertainment System) and the Sega Master System. This was the era of colorful graphics that helped bring the inventive Japanese game design to the global masses.
  • 2nd Generation: This was the generation of game cartridges. The Atari 2600 VCS was the most famous machine of this era, followed by the Intellivision and by the lesser-known Fairchild Channel F (which actually pioneered the cartridge). We saw a shift in the way we think of consoles, where one machine could now run multiple games.
  • 1st Generation: This generation was composed of dedicated consoles, where each machine could play only one or two games built into the console hardware. The first home console was the Magnavox Odyssey, followed by Atari's home Pong and a wave of Pong clones.

There we go! A super short review from the current 9th generation back to the genesis of home consoles in 1972. If you had any contact with video games in your life, chances are you recognize some of the names I just mentioned.

Now, to really understand how evolution worked in terms of technology, we need to go the other way around. Let's start from the first generation and climb our way up until the PS2 & the first Xbox. But for that, let's put our programmer's hat on and pay close attention to the hardware and software limitations of each era. Understanding the limitations will help us better understand the technologies that were created to solve them.

We'll start with the generation that still thought one machine should be responsible for doing one thing and one thing only.

First Generation (1972-1980): One Machine, One Game.

This first generation was trying to bring the "arcade experience" into people's homes.

Before home consoles, we had big bulky arcade machines with a single game manufactured into the hardware. Pong was one of the earliest arcade games. It was created by Allan Alcorn as a training exercise assigned to him by Atari co-founder Nolan Bushnell, but the whole team was so surprised by the quality of Alcorn's work that they ended up manufacturing the game.

atari pong arcade machine

The technology of these first consoles was very basic. One thing that most students don't realize is that these machines had no microprocessor. There was no CPU orchestrating the logic of the game, and the entire game logic was simply the result of signals flowing through discrete circuits.

For example, the Pong arcade game was just a series of hardwired logic gates, flip-flops, and resistors, all ticking at a certain frequency. A crystal clock oscillates and feeds pulses to the digital circuit, and this logic dictates whether the output video signal at each scanline is white (on) or black (off) on the screen.

pong discrete circuit
These first machines had no microprocessor, and the game logic was controlled using discrete circuits.

In the image above, the circuit implements a vertical counter that alternates the video signal between on and off. This is what generates the net in the middle of the screen as the CRT beam scans the lines from top to bottom. This rudimentary way of generating the video signal was also the reason most games could only display rectangular graphics (net, paddle, ball, score, etc.).
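
To make the idea concrete, here is a tiny C sketch (a loose simulation I wrote, not real Pong hardware) showing how one bit of a free-running scanline counter can flip the video signal on and off, producing a dashed net:

#include <stdio.h>

int main(void) {
    /* one bit of a free-running scanline counter toggles the video
       signal, drawing the dashed vertical net down the screen */
    for (int scanline = 0; scanline < 16; scanline++) {
        int net_on = (scanline >> 1) & 1;   /* flips every 2 scanlines */
        printf("scanline %2d: %s\n", scanline, net_on ? "##" : "  ");
    }
    return 0;
}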

Following the desire to bring the arcade into people's homes, we ended up with small dedicated consoles with one or two games pre-built into the hardware. And even on consoles that could run more than one game, changing games usually meant flipping a physical switch on the back of the machine.

magnavox odyssey

The Magnavox Odyssey was the first documented home video game console, released in September of 1972. The original machine was built from solid-state circuits, but Magnavox later transitioned to cheaper integrated circuits to develop a new line of machines, the Odyssey series, released between 1975 and 1977.

The Odyssey also had no microprocessor. It was only capable of displaying three square dots and one line of varying height, all in monochrome. The behavior of the dots changed depending on the game.

Odyssey users could move the dots using the controllers, but to add "fancier" visuals, games included plastic overlays (thin transparent sheets) that stuck to the TV screen via static cling. For example, a colorful overlay might add obstacles that the dots should avoid or lines that the dots should follow. Pretty ingenious if you ask me.

odyssey overlay
Here we have an Odyssey game without an overlay and another game that uses a plastic overlay to create richer visuals.

Atari launched Pong as an arcade game in November of 1972, and eventually partnered with Sears to launch the new home Pong console for the Christmas season of 1975. This machine offered several advantages over the Odyssey, including an internal sound chip and the ability to keep track of player score.

atari pong arcade machine

Numerous third-party manufacturers had entered the console market by 1977, mostly simply cloning Pong or other games. This saturated the market with hundreds of different consoles of poor quality to choose from, causing a game market crash soon after.

It was also clear that designing and manufacturing an entirely new machine to play just a few games was not going to cut it! Companies had to find a cheaper way of distributing games. As you'll see in the next generations, price is a recurring deciding factor in developing and adopting new technologies. Let's move forward and learn about the second generation of game consoles: the generation of game cartridges!

Second Generation (1976-1992): The Cartridge Era.

The second generation of home consoles was distinguished by the introduction of the game cartridge. With this technology, the code of each game is stored inside a cart using read-only memory (ROM) chips.

When we plug in a new cartridge, the console gets a direct link to the game code stored in the cartridge ROM, so there is no need to pre-load the code into memory. The console, now powered by a microprocessor, has direct access to the ROM's contents, which are mapped into the console's main address space. This approach was a lot faster than the other storage alternatives of the time, like cassette tapes or floppy disks.
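
Here is a minimal C sketch of that idea, assuming a hypothetical console that maps a 4kB cartridge ROM at address $F000 (the addresses and values are illustrative):

#include <stdint.h>
#include <stdio.h>

#define ROM_START 0xF000           /* hypothetical cartridge mapping */
#define ROM_SIZE  0x1000           /* 4kB of game code and data */

static uint8_t bus[0x10000];       /* the console's 64kB address space */

/* the CPU fetches game bytes straight from the mapped ROM: no loading */
static uint8_t cpu_read(uint16_t addr) {
    return bus[addr];
}

int main(void) {
    bus[ROM_START] = 0xA9;         /* pretend: first opcode of the game */
    printf("first fetch: %02X\n", cpu_read(ROM_START));
    return 0;
}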

The main consoles of this generation were the Fairchild Channel F, the Atari 2600 VCS, and the Intellivision. The Atari 2600 VCS became the most popular game console of the 2nd generation, and it helped spread the adoption of game cartridges as the main medium for storing and distributing games.

atari 2600 vcs
Being able to use cartridges to play multiple Atari 2600 games was definitely a game changer... ha!

Programming for these machines was not an easy task. The Atari VCS had a slow 1.19MHz processor, 128 bytes of RAM, and we had only 4kB of ROM to store the code instructions of our game. The entire game, including logic, graphics, sprites, bitmap patterns, and all game data had to fit in less than 4kB of ROM.

Fun fact: Any PNG image on this website is bigger than 4kB, and therefore bigger than any game you've ever played on the Atari 2600. That said, some programmers used a technique called bank switching to increase the available ROM size in later games, as sketched below.
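
Here is a tiny C sketch of how bank switching works, loosely modeled on the 2600's "F8" scheme, where touching special "hotspot" addresses swaps which 4kB ROM bank the CPU sees (details simplified):

#include <stdint.h>
#include <stdio.h>

static uint8_t rom[2][0x1000];     /* two 4kB banks on the cartridge */
static int current_bank = 0;

static uint8_t cpu_read(uint16_t addr) {
    if (addr == 0xFFF8) current_bank = 0;  /* hotspot: select bank 0 */
    if (addr == 0xFFF9) current_bank = 1;  /* hotspot: select bank 1 */
    return rom[current_bank][addr & 0x0FFF];
}

int main(void) {
    rom[0][0x123] = 0xAA;
    rom[1][0x123] = 0xBB;
    printf("%02X ", cpu_read(0xF123));  /* reads from bank 0 */
    cpu_read(0xFFF9);                   /* touch hotspot: switch banks */
    printf("%02X\n", cpu_read(0xF123)); /* now reads from bank 1 */
    return 0;
}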

The architecture of these machines was still very rudimentary. One could easily draw the schematics of the Atari 2600 by hand. You'll also hear programmers say that coding for the Atari VCS required us to "race the beam."

racing the beam
The Atari 2600 is programmed as the electron beam scans the display from top to bottom.

Racing the beam means that we must count the clock cycles of each CPU instruction and match them against the time the CRT beam takes to trace the display. The CPU has to tell the video chip what to render in real time, staying just ahead of the beam as it sweeps the screen from top to bottom... fun times!
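
To give you a feel for the bookkeeping, here is a hypothetical C sketch: on an NTSC Atari 2600 the CPU gets only 76 clock cycles per scanline, and the cost of every instruction has to fit inside that budget (the individual instruction costs below are illustrative):

#define CYCLES_PER_SCANLINE 76     /* NTSC Atari 2600: 76 CPU cycles/line */

int missed_lines(void) {
    int missed = 0;
    for (int line = 0; line < 192; line++) {   /* visible scanlines */
        int budget = CYCLES_PER_SCANLINE;
        budget -= 3;   /* e.g., store to a video register for this line */
        budget -= 5;   /* fetch this line's sprite graphics byte */
        budget -= 4;   /* set the playfield pattern */
        /* ...every single instruction is hand-counted like this... */
        if (budget < 0) missed++;              /* we lost the race! */
    }
    return missed;
}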

atari combat tank
The VCS had special memory registers for different screen objects (player, playfield, missile, and ball).

The TIA (Television Interface Adaptor) was a custom chip that generated the video output and the sound effects for the VCS. As described above, the chip was designed without a frame buffer, relying on the CPU to send the correct signals at precisely the right time to display the objects on the screen.

If you want to learn the dirty details of how to program games for the Atari 2600, I have a cool course that teaches coding for the VCS using assembly language. We use the super simple 2600 architecture to learn how digital machines work under the hood. If this sounds like fun, visit the courses page.

atari 2600 programming tutorial

Before we say goodbye to the second generation, I just want to briefly mention something else that was very important in terms of technology history. The Atari VCS was powered by a microprocessor, which is a special integrated chip that orchestrates how data moves and is processed inside the console. When Atari was considering a processor to power the VCS, CPU manufacturers like Intel, Motorola, and MOS entered a price war, and MOS was the only company that could reach the price point Atari was willing to pay. This was an extremely important step in technology history, as we were witnessing the first steps of a super famous family of processors known as the 6502.

atari 2600 6507 cpu
The Atari 2600 was powered by a 6507 CPU, which was basically a cheaper version of the original 6502.

The 6502 CPU ended up powering several video game consoles and microcomputers. Most programmers of the 70s and 80s had to code for some type of 6502 processor. Besides the Atari 2600, some other famous machines that used a 6502 processor were the Nintendo Entertainment System (NES), the Commodore VIC-20, the Commodore 64, the Apple II, the Tamagotchi, and many (many!) others.

Third Generation (1983-2003): The 8-bit Generation.

During the video game crash of 1983, the United States lost its interest in console games in favor of personal computers. The 3rd generation was responsible for making video games popular again in America.

The machines of this generation used 8-bit processors, allowing programmers to push the boundaries of what was possible in terms of graphics and audio. Colorful sprites and tiles made the creativity of Japanese game design famous in the entire world.

The most popular consoles of this generation were the NES, the Sega SG-1000, and the Sega Master System.

Nintendo originally released the Family Computer (or Famicom) in Japan, but rebranded the console as the NES in North America to avoid the negative association the term "video game" carried after the crash.

nintendo nes
The NES (Nintendo Entertainment System) was the most popular console of the 3rd generation.

This generation of consoles holds a special place in my heart. Growing up in Brazil, the first console I ever owned was a Taiwanese NES clone called "Micro Genius"!

Fun fact: My console came with only one game, Double Dragon II, which I played over and over for several months. I also did not know English, so I just assumed one of the characters was called "Double" and the other one "Dragon."

These machines allowed for much more interesting graphics than the ones from the 2nd generation. Now we could have dozens of colors (the NES picked its colors from a master palette of 64 entries), five audio channels, and other advanced graphics effects. For example, the NES could handle a set of sprites and tiles by using a dedicated picture processing unit.

The PPU generated the video output for the NES, running at 3x the frequency of the CPU (each PPU cycle outputs one pixel while rendering). The NES PPU could render a background layer and up to 64 sprites, where sprites could be 8x8 or 8x16 pixels. The background could be scrolled along both the X and Y axes, including "fine" scrolling (moving one pixel at a time). Both background and sprites were made from 8x8 tiles, and these tiles were defined in "pattern tables" inside the cartridge ROM.

nes ppu
Pattern tables in the PPU defined the tiles that could be used in Super Mario Bros.
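
To make the tile format tangible, here is a small C sketch (with a made-up example tile) that decodes one 8x8 tile the way the NES PPU does: each tile is 16 bytes, two 8-byte bitplanes combined into 2-bit palette indices:

#include <stdint.h>
#include <stdio.h>

/* decode one NES tile: byte y is bitplane 0, byte y+8 is bitplane 1 */
void decode_tile(const uint8_t tile[16]) {
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++) {
            int bit = 7 - x;
            int color = ((tile[y] >> bit) & 1)
                      | (((tile[y + 8] >> bit) & 1) << 1);
            putchar(" .oX"[color]);   /* palette index 0-3 */
        }
        putchar('\n');
    }
}

int main(void) {
    uint8_t example[16] = { 0x3C, 0x42, 0xA5, 0x81, 0xA5, 0x99, 0x42, 0x3C,
                            0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
    decode_tile(example);             /* prints a tiny 8x8 face */
    return 0;
}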

Right, so that's a lot of information and numbers to remember about how graphics work on the NES. All you really need to take from all this is that working with tiles and sprites was a lot easier than having to code raw blocks of bitmap in ROM like we did with the VCS in the previous generation. The PPU (picture processing unit) handled the graphics part of NES games, and the APU (audio processing unit) handled the audio.

The processor on the NES was a 6502-based Ricoh 2A03. Virtually every serious game that was developed for the NES was programmed in 6502 assembly language.

Although the machine architecture of most third-generation consoles was still fairly simple, the slow speed of the CPU (1.79MHz on the NES) meant every clock cycle was crucial when programming a game. NES programmers often knew by heart how many clock cycles each CPU instruction took. Counting clock cycles for every line of 6502 assembly code was a very common practice in this generation as well!

Given that the C programming language was already a useful tool in the computer science world, wouldn't it be nice if we could use a high-level language to write games for the NES?

C was basically a "portable assembly," meaning that if we had a C compiler that knew how to generate MOS 6502 machine code, we should theoretically be able to write our games in C instead of writing 6502 assembly by hand.

There were some small C compilers that could spit out 6502 machine code for the NES. Unfortunately, the compiled output was too slow for it to be a viable option. Therefore, if you were a NES programmer and you wanted to squeeze everything you could out of the hardware, there was no other option than to write 6502 assembly manually.

But technology evolves. In order to create better graphics and allow for more complex games, the next generation had to raise the bar once again. We are about to enter the revolutionary 16-bit generation of home consoles!

Fourth Generation (1987-2004): The 16-bit Generation.

Most consoles of the 4th generation took advantage of 16-bit processors. In simple terms, the registers inside these CPUs could now store and manipulate 16-bit values. Besides a faster CPU clock, these new CPUs could address exponentially more memory than the previous generation. As expected, all this horsepower meant two important things: better graphics and better audio quality.

Strangely enough, the first console of the 4th generation - the NEC TurboGrafx-16 - was powered by an 8-bit CPU. Nintendo and Sega entered the competition with the SNES and the Sega Genesis, both running true 16-bit processors.

snes sega genesis turbografx
The holy trinity of the 4th generation: NEC TurboGrafx-16, Nintendo SNES, and Sega Genesis.

Fun fact: If you were a kid in Brazil in the early 90s, you probably never heard of the name Sega Genesis. If you are not from the US, chances are you know the Genesis by its original Japanese name, Mega Drive.

Okay, so let's pause for just a second. The NEC TurboGrafx-16 had an 8-bit CPU and still managed to compete with those other powerful 16-bit machines? How does that work?

TurboGrafx-16

NEC chose a 65C02-based processor, the Hudson Soft HuC6280, to power the TurboGrafx-16. It was a fast 8-bit processor running at an impressive 7.16MHz. To put things into perspective, the 16-bit processor of the SNES ran at a miserable 3.6MHz!

Nintendo had to cap the SNES CPU clock at 3.6MHz because they chose to use slow RAM and ROM chips. It really does not matter how fast the CPU is; slow RAM will always bottleneck the machine's performance.

This romance between CPU speed and RAM speed is still relevant today. Most people think of performance just in terms of CPU clock speed, but since the nature of most programs is to constantly read and write values from memory, it is important to pair CPU speed with RAM speed. That's why things like RAM clock and dual channel are so important for performance, and that's also why companies like Apple decided to design their M1 chip with the RAM sitting right beside the processor in the same package, as part of a unified pool of memory.

apple m1 ram
Apple's M1 "Unified Memory Architecture" views RAM as a single pool of memory that all parts of the processor can access. GPU, CPU, and other parts of the chip can access the same data at the same memory address.

So, there we go! NEC managed to compete in the 16-bit market by using a fast 8-bit CPU and pairing it with fast RAM and ROM chips.

bonk adventure turbografx
Bonk, from Bonk's Adventure, was the mascot of the NEC TurboGrafx-16.

The superior graphics quality of most TurboGrafx-16 games was achieved by taking advantage of a powerful 16-bit graphics chip, which was also the reason NEC started to advertise the TurboGrafx-16 as a "16-bit machine."

SNES

Most people remember the SNES as the most capable console of this generation. The SNES was not the most powerful machine, but I'd bet the fame comes from its superior graphics PPU. The picture processing unit on the SNES was capable of achieving some very interesting effects. The programmer had access to eight video modes, including the famous Mode 7, which allowed the background layer to be transformed (scaled, rotated, translated, reflected, etc.) using matrix transforms. If you ever played Super Mario Kart or Pilotwings, those fake 3D effects were created with Mode 7.

mario kart mode 7
Mode 7 being used to transform the background layer on Super Mario Kart.
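
Under the hood, Mode 7 is essentially an affine transform evaluated for every screen pixel. Here is a rough C sketch of the idea (screen size, pivot, and background sampling are simplified; the real hardware can even change the matrix per scanline, which is how perspective effects are achieved):

#include <math.h>

/* checkerboard stand-in for the real background tile map */
static int sample_background(float u, float v) {
    return ((int)u / 8 + (int)v / 8) & 1;
}

void mode7_frame(int framebuffer[224][256], float angle, float scale) {
    float a = cosf(angle) / scale, b = sinf(angle) / scale;  /* matrix */
    float cx = 128.0f, cy = 112.0f;        /* pivot at screen center */
    for (int sy = 0; sy < 224; sy++) {
        for (int sx = 0; sx < 256; sx++) {
            /* rotate + scale screen coords into background coords */
            float u = a * (sx - cx) - b * (sy - cy) + cx;
            float v = b * (sx - cx) + a * (sy - cy) + cy;
            framebuffer[sy][sx] = sample_background(u, v);
        }
    }
}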

The SNES's 16-bit CPU was based on the 65C816, a 16-bit evolution of the 6502 architecture. Once again, due to the design nature of the 6502 family, its small number of registers, and the overhead produced by most C compilers of the time, most SNES games were programmed in hand-written 65816 assembly.

Genesis (Mega Drive)

Sega decided to go with a more powerful processor for the Genesis. The Motorola 68000 CPU ran at 7.6MHz (twice the speed of the SNES) and was a hybrid 16/32-bit processor. The 68000 became a very popular CPU, being the processor of choice for microcomputers like the Commodore Amiga and the original Apple Macintosh.

motorola 68000 mega drive
The Motorola 68000 on the Sega Genesis was a hybrid 16/32-bit CPU running at 7.6MHz.

Here is where we start to see a huge shift in the way we developed games. The available hardware power of the Genesis meant that some games (or at least parts of them) could now be written in C!

Even though 68000 assembly is considered by many to be one of the most human-friendly assembly languages out there, being able to write parts of the game in a high-level programming language was a huge deal. C compilers were getting faster and better at spitting out optimized compiled code. As we'll see soon, writing C would become the preferred way of developing games for the next generation of consoles.

Fun fact: Even though the Sega Genesis didn't have the powerful graphics PPU of the SNES, the fast 68000 processor allowed Genesis games to achieve effects similar to those of Mode 7 via software (only using the CPU).

pier solar mode 7
The Adventures of Batman & Robin achieved Mode7-like effects on the Genesis via software.

Something else that is probably becoming very clear to you is that using the CPU alone to push pixels is not going to cut it. Did you notice how most consoles chose to pair the CPU with a dedicated graphics unit? This approach became extremely popular in the following generations, and it still holds for modern gaming devices, as most machines come with a dedicated graphics unit (or graphics card).

By the end of the 4th generation, Nintendo used dedicated coprocessors manufactured into the game cartridge itself to enhance the graphics capabilities of games. The most famous example is the Super FX chip. The Super FX was designed by Argonaut Games, who also co-developed the 3D game Star Fox with Nintendo to demonstrate the chip's extra polygon-rendering capabilities.

star fox super fx
Star Fox is generally considered one of the first games to use real-time 3D polygon rendering on consoles.

The Super FX was a custom-made RISC coprocessor, typically programmed to act like a graphics accelerator that draws polygons into a frame buffer in the RAM sitting adjacent to it. The data in this frame buffer is periodically transferred to the main video memory inside the console, using direct memory access, in order to be displayed on the TV screen.
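
Here is a rough C sketch of that arrangement (buffer size and timing are illustrative, and on real hardware the copy is a DMA transfer, not a CPU loop):

#include <stdint.h>
#include <string.h>

static uint8_t cart_framebuffer[256 * 160]; /* RAM next to the Super FX */
static uint8_t console_vram[256 * 160];     /* video memory in the SNES */

void superfx_render(void) {
    /* ...the coprocessor rasterizes the scene's polygons in here... */
}

void on_vblank(void) {
    /* periodically push the finished frame into the console's VRAM */
    memcpy(console_vram, cart_framebuffer, sizeof cart_framebuffer);
}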

super fx 2
Super FX 2 chip sitting inside the cartridge of Super Mario World 2: Yoshi's Island.

Fun fact: While in development, the Super FX chip was codenamed "MARIO chip", with the initials standing for "Mathematical, Argonaut, Rotation, & Input/Output".

There was also a Super FX 2 chip, a faster version of the Super FX that ran at an impressive 21.4MHz. The game Star Fox 2 used the full power of the Super FX 2, although it ended up shelved and only officially released in 2017.

And speaking of 3D games for the SNES, some people say, "But Gustavo, I remember playing Donkey Kong Country for the SNES, and it had some cool 3D objects with lighting and shadows. Don't they also render real-time 3D polygons?"

Well, no! Not really. The developers of Donkey Kong Country used a super expensive Silicon Graphics workstation to model the 3D objects of the game. But in the end, the game simply uses pre-rendered 2D sprites of those 3D models.

donkey kong country
Donkey Kong Country for the SNES with "fake" 3D visuals.

So, when we play Donkey Kong Country on the SNES, what we are seeing is just a series of pre-rendered 2D images that give us the impression of a 3D game.

donkey kong country spritesheet
2D spritesheet for one of the animations of Donkey Kong Country.

Beautiful! This was an amazing era, and it helped me cover some super interesting concepts. But it's time to move forward; we are about to enter the next generation of consoles. Let's go 32-bits!

Fifth Generation (1993-2006): The 32-bit Generation.

Now we're talking! The 5th generation of home consoles was the big jump from 16 bits to 32 bits. This shift allowed machines to access exponentially more memory addresses, and games could take advantage of faster CPU clock speeds.

Home computers were pressuring the home console market once again. PCs had fast processors and multimedia capabilities, and GPU acceleration was starting to become a thing. Consoles were forced to step up and get better at pushing not only fast pixels, but fast real-time 3D polygons to the screen.

The 5th generation introduced Sony as one of the main players in the home console market with the release of the original PlayStation. Chronologically, the big names of the 32-bit generation were the Sega Saturn, the Sony PlayStation, and the Nintendo 64.

saturn playstation nintendo 64
The main consoles of the 32-bit generation: Sega Saturn, Sony PlayStation, and Nintendo 64.

Some other important technologies also gained traction in this generation. The price of optical discs (CD-ROMs) dropped enough to make them attractive to the console market. They were already the preferred way of shipping PC games, but it was the PlayStation that popularized the use of CDs for console games.

jet moto playstation
Jet Moto showing the fast 3D polygons that were possible with the PlayStation.

The Nintendo 64 still used game cartridges, as Nintendo believed the load-time advantages of cartridges over CD-ROMs were still essential (as well as their ability to keep using lockout mechanisms to protect copyrights).

In the fifth generation, we also saw a shift in the way games were programmed. Finally, with faster CPU speeds and compilers becoming more powerful, games were developed in C. And I invite you to stop for a moment and contemplate what this really meant for game developers. As hardware became faster and faster, we could now trade a little bit of computing power for "programmer happiness."

Most C compilers were reaching a level of maturity where the assembly they generated was similar to what a human would write. We were finally at a point in tech history where assembly was something for compilers to generate, not for humans to write.

Fun fact: Modern compilers will generate better and more optimized assembly output than most programmers... but you did not hear that from me.

I think it's also important to explain how this is related to some hardware decisions of that era. The 5th generation used 32-bit processors based on RISC architecture.

CISC vs. RISC

RISC and CISC are different approaches to processor design. RISC stands for Reduced Instruction Set Computer; it tries to reduce the number of cycles per instruction at the cost of more instructions per program. CISC stands for Complex Instruction Set Computer; it attempts to minimize the number of instructions per program at the cost of more cycles per instruction.

intel 386 cpu

For example, both the Motorola 68000 and the Intel x86 family are based on the CISC approach (complex instructions). A single CISC instruction can perform many low-level operations, making CISC assembly programs usually smaller and easier for humans to write and reason about.

On the other hand, the NEC VR4300 CPU of the Nintendo 64 is based on RISC. RISC instructions are fast, usually taking only one clock cycle, but simple instructions mean longer programs. So it might be more annoying to write RISC assembly by hand, but it runs more efficiently and usually consumes less energy.

nec vr4300 cpu

In summary, CISC instructions pack a lot of functionality and can take many CPU cycles to execute, while RISC instructions are short and streamlined, each taking only a few CPU cycles to run.

As a simple example, think of a hypothetical CISC processor that has a built-in instruction to compute the factorial of a number that is loaded in a register called A. For the sake of simplicity, let's call this assembly instruction "FACT".

-- Factorial of 6 on CISC
LOAD A, 6    -- load the value 6 into register A
FACT A       -- a single complex instruction computes 6! into A

On a RISC processor, the programmer would need to write a larger program to compute the factorial of a number, potentially using a series of simple MUL instructions to multiply values together.

-- Factorial of 6 on RISC
LOAD A, 6    -- load the value 6 into register A
MUL 5        -- A = A * 5
MUL 4        -- A = A * 4
MUL 3        -- A = A * 3
MUL 2        -- A = A * 2  (A now holds 720, which is 6!)

You have probably heard of the ARM processors that power mobile phones and tablets. ARM CPUs are based on RISC and became a popular choice for mobile devices and energy-efficient hardware. The Apple M1 chip is also based on the ARM/RISC architecture.

Writing assembly by hand was a common task back in 1989, but nowadays we can clearly see the appeal of a simpler, more efficient, and more streamlined RISC architecture. If we consider that the majority of today's assembly code is generated by compilers and not by humans, we could almost say that the RISC architecture was ahead of its time.

Cool! I hope this clarifies what was going on at the time in terms of CPU architecture.

Firmware

Something else that happened with the 32-bit jump was the introduction of a strong layer of abstraction between the programmer and the hardware. Previous consoles had no concept of an operating system sitting between the developer and the machine; we had direct access to the hardware. The 32-bit generation introduced the idea of firmware, which acted like a lightweight OS for the console.

Firmware became necessary because of how complex the machines were getting. It sat between the game code and the hardware, exposing the low-level functionality that the programmer could use. Companies like Sony and Nintendo also provided SDKs and programming libraries that developers were expected to use when creating games for their consoles.

3D Polygons

The fifth generation also saw an explosion of 3D titles. Taking advantage of dedicated polygon-rendering chips, these 32-bit consoles could now push fast polygons and render real-time 3D graphics. Of course, we are talking about low-poly 3D meshes and low-resolution textures, but it was absolutely mind-blowing to see early 3D graphics running smoothly on these machines!

super mario 64
Developed by Nintendo, Super Mario 64 was the first Super Mario game to feature 3D gameplay.

GPUs

Being able to emulate console games on a PC was always a big deal for gamers. Emulating games from previous generations on a home computer was technically not that hard, since most PC processors of the time could render 2D pixels and compute 2D effects in software. But 5th generation consoles came with complex dedicated polygon-processing hardware, so it became extremely difficult for PCs to emulate these games using only CPU power.

Some PC emulators for PlayStation and Nintendo 64 took advantage of hardware acceleration. If you wanted to really experience the power of late PlayStation or Nintendo 64 3D games, you most likely wanted to have a GPU card installed on your computer.

3dfx voodoo

One of the pioneers in the field of graphics acceleration was a company called 3Dfx, which made a dent in the PC market with its famous Voodoo GPUs. To take advantage of their graphics cards, programmers used a library called Glide, developed and distributed by 3Dfx itself.

Therefore, some emulators tried to emulate the console's polygon-rendering capabilities on the computer's GPU using the Glide API. If you ever installed emulators like Bleem! or ePSXe on Windows, you probably had to find the correct DLL version of the Glide API for your system. These DLLs contain the implementation of the functions exposed by 3Dfx that the emulator can use to access the graphics card.
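
For the curious, here is a minimal Win32 C sketch of what loading such a DLL at runtime looks like (glide2x.dll was the name of the 3Dfx Glide 2.x DLL; treat the rest of the details as illustrative):

#include <windows.h>
#include <stdio.h>

typedef void (*GlideInitFn)(void);

int main(void) {
    HMODULE glide = LoadLibrary("glide2x.dll");    /* the 3Dfx Glide DLL */
    if (!glide) { printf("Glide DLL not found\n"); return 1; }
    /* look up one of the functions implemented inside the DLL */
    GlideInitFn init = (GlideInitFn)GetProcAddress(glide, "grGlideInit");
    if (init) init();                              /* talk to the card */
    FreeLibrary(glide);
    return 0;
}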

To get an idea of how GPU cards work, let's look at a very simple example.

Let's imagine we are developing a game for a console with a CPU running at 7.6MHz (roughly 7.6 million simple instructions per second) and a screen resolution of 320x224. That's a total of 71,680 pixels for a single frame of our game.

resolution 320x224

Now imagine that we must convert a certain scene of the game to grayscale in one of the levels. Using only CPU instructions, we'll probably need a loop that visits every pixel of the screen, changing them one by one. That's 71,680 pixels per frame! And remember that we are inside a game loop, so this needs to happen 60 times per second.

That's a lot of CPU clock cycles! Even if we assume that each pixel operation takes only one clock cycle to execute, that's 71,680 cycles times 60, giving us a total of about 4.3 million cycles in one second. That's more than half of what our 7.6MHz CPU can handle. Not cool!
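
In C, that naive CPU-only approach would look something like this sketch (pixel format and framing simplified):

#include <stdint.h>

#define WIDTH  320
#define HEIGHT 224

/* visit all 71,680 pixels, one by one, every single frame */
void grayscale_cpu(uint8_t r[], uint8_t g[], uint8_t b[]) {
    int i = 0;
    while (i < WIDTH * HEIGHT) {
        uint8_t gray = (uint8_t)((r[i] + g[i] + b[i]) / 3);
        r[i] = g[i] = b[i] = gray;
        i++;
    }
}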

Now, let's consider the same problem but with the help of a GPU. Unlike CPUs, GPUs contain tens or hundreds of small cores. All these cores mean one thing: processing large quantities of data in parallel!

Besides having multiple cores, GPUs love to work with arrays. Instead of performing an instruction on one single value, GPUs process large vectors of data per cycle. If we need to process all those 320x224 pixels of our screen, we can send large arrays of pixels to be processed. The GPU will divide and conquer the problem, using as many cores as possible to complete the task.

gpu hardware acceleration
The GPU pipeline can process large arrays of 3D vertices and pixel operations.

The key word here is parallelism. It just so happens that most computer graphics problems can be optimized using this approach.

Just keep in mind that GPUs solve a very specific class of problems, and not every computation can be optimized by throwing GPUs at it. If a problem is inherently sequential (where the next instruction depends on the result of the previous one), then a CPU is still the best tool for the job. But in our original example, since each pixel doesn't depend on the result of any other pixel, the GPU will definitely improve the speed of the rendering.

One last thing I want to mention about GPUs is that, back in the day, card manufacturers used to provide custom libraries for developers to work with their GPUs. Taking the example of the now-defunct 3Dfx Voodoo, the Glide API was a 3Dfx library that only knew how to communicate with 3Dfx cards. If you had a GPU from a different vendor, you needed a completely different API. Not cool!

As computer graphics evolved, there was a strong movement toward standardizing how we program GPUs. This is why we have things like OpenGL and Vulkan. These are open APIs that abstract away how each GPU is implemented and work across cards from different vendors. This makes programming modern graphics and working with modern GPUs far less cumbersome than it was in the 3Dfx/Voodoo era.

opengl api
OpenGL is a popular graphics API that exposes access to the GPU.

Modern GPUs also allow the programmer to send small programs with instructions for what the GPU needs to execute for every vertex and every pixel of our 3D scene. You probably know these programs as shaders. The code inside these shaders dictates how the GPU should transform each vertex of our 3D model (vertex shaders) or paint each pixel on the display (pixel shaders). This is why modern GPUs claim to have a programmable pipeline, which is basically a way of saying that developers can use shaders to program how the graphics pipeline processes vertices and pixels when rendering. To relieve the main processor, more and more processing steps have been moved into this pipeline on the GPU.
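
Conceptually, a pixel shader is just a tiny function that the GPU runs once per pixel, massively in parallel. Real shaders are written in dedicated languages like GLSL or HLSL, but in C-like pseudocode our earlier grayscale effect boils down to something like this:

typedef struct { float r, g, b; } Color;

/* runs on the GPU once per pixel, massively in parallel */
Color grayscale_pixel_shader(Color in) {
    /* weighted average that matches how our eyes perceive brightness */
    float gray = 0.299f * in.r + 0.587f * in.g + 0.114f * in.b;
    Color out = { gray, gray, gray };
    return out;
}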

Alright! We have used the 5th generation to explore several important ideas of digital machines and programming that are still valid today. Let's go back to our original plan and learn about the next generation of home consoles.

Sixth Generation (1998-2013): Keeping up with the PC.

In the sixth generation, consoles began to catch up and match the performance of personal computers of the time. This will be the last generation that we'll cover in this article, since our focus is mostly retro hardware and early technologies.

With the adoption of the DVD as their primary media and the spread of both plasma and LED TVs, the consoles of the 6th generation tried to converge with the other electronic devices of the living room.

Up until this point, the three main home console contenders were Sega, Nintendo, and Sony. The sixth generation saw one of the contenders drop out, and a new contender take its first steps.

Sega released the Dreamcast in November of 1998. Powered by a Hitachi SH-4 CPU and a NEC PowerVR2 GPU, the Dreamcast is considered the first console of the 6th generation. It was also the first console to include a built-in modem, allowing players to connect to Sega's network and play online games.

sega dreamcast

The Dreamcast was largely outperformed by the Sony PlayStation 2. This was the last console released by Sega, who became a third-party software publisher after that.

nintendo gamecube

Nintendo's 6th generation console was the GameCube, launched in September of 2001. It was the successor to the Nintendo 64 and the first Nintendo console to use optical discs, adopting a proprietary miniDVD capable of storing up to 1.5GB of data as its storage medium.

Personally, I always thought the GameCube was technically very interesting. Nintendo partnered with IBM for the CPU and with ArtX/ATI for the system logic and GPU. IBM designed a RISC PowerPC-based processor for the GameCube called Gekko, which ran at 486MHz and featured a powerful floating-point unit (FPU). The GPU, codenamed "Flipper," ran at 162MHz and contained not only the graphics unit, but also the audio DSP and the I/O controllers.

The reception of the GameCube was mixed. It was praised for its innovative controller but criticized for its overall lack of features. Unfortunately, the console sold much less than Nintendo anticipated, and it was discontinued in 2007.

playstation 2

The PlayStation 2 was an absolute monster! It was powered by a custom-designed 128-bit R5900 "Emotion Engine" CPU. Its GPU (called the Graphics Synthesizer) was also custom-designed and was capable of rendering up to 75 million polygons per second! Beyond the tech specs, Sony's decision to make the PS2 backwards-compatible with original PlayStation games was a genius market move. With over 155 million units sold, the PS2 is the best-selling video game console in history!

At this point, Microsoft saw the success of the PS2 as a threat to the market of gaming on the personal computer. The Xbox became the first console released by Microsoft, and it was designed based on Microsoft's experience with personal computers.

The Xbox ran an OS based on Microsoft Windows with DirectX features. It had a custom Intel Pentium III CPU, used a hard disk to save game state, came with built-in Ethernet, and introduced an online service (called Xbox Live) to support multiplayer games.

xbox

In terms of how we develop games, we saw clear signs that games were becoming a product of extreme collaboration. We're talking about tens or hundreds of people working together on a single game. Game studios started to fill warehouses with illustrators, programmers, 3D artists, level designers, managers, and many other professionals working alongside the development team, like marketing and research.

All this collaboration and the huge number of developers working on a single game is one of the reasons we need source control tools like Git or Mercurial today. Also, with strict market deadlines, game studios even borrowed some traditional productivity methodologies from the software engineering world, like XP (extreme programming) and Scrum.

We also saw the adoption of programming paradigms that helped development teams reason about software quality, growth, and maintainability. Different ways of thinking about software components, like object-oriented programming (OOP) and functional programming (FP), are now common in game code. C++ became the industry-standard language for games, mostly because of its ability to deliver compiled performance while also offering good support for OOP.

With complex machines and CPU cycles to spare, the use of general-purpose IDE-like game engines also became a reality. Engines like Unity and Unreal are great productivity tools that help many games see the light of day. Developers can now plan their code and talk about their games in abstract terms like game objects, entities, and components, while level designers can simply decide how these pieces interact with each other using a high-level scripting language.

unity game engine
Unity helps developers create a game once and deploy it to different platforms (Xbox, PlayStation, PC, etc.)

Going back to the development of games for the PlayStation and the Xbox, developers are pretty much tied to a game engine or to the SDKs provided by Sony and Microsoft. Not only libraries and tools, but entire ecosystems for registering developers and selling games. As you can see, the increasing distance between programmer and machine is here to stay. The era of programming by directly poking bits in memory is long gone.

Bringing it Home...

And here is where we'll stop. Super cool stuff, right? I told you this was going to be a fun ride. And it ended up being a long one as well.

We just visited the first six generations of home game consoles, giving a very high-level overview of the tech that revolutionized each one of them. Of course, there are many other interesting ideas that were adopted in the subsequent generations, but the ones we covered are probably the most important in terms of hardware evolution and game design decisions that linger to this day.

If you think I forgot an important detail (which I'm sure I did), you can always yell at me on Twitter or even subscribe to my YouTube channel.

I really hope putting things into historical context helped you understand the driving forces that caused these technologies to be created in the first place. Hopefully it also helps us answer some of the questions we had when we first started. You can probably now explain in simple terms why we need to link OpenGL or Vulkan with our projects, or why we need to use the operating system API when we want to poke the hardware of our system. If that's the case, then I believe the long journey was worth it.

Take some time to also contemplate all the beautiful tech we touched. Cartridges, coprocessors, dedicated graphics units, RISC, CISC, polygon-rendering chips, firmware, GPUs, graphics APIs, and so (so!) much more. Hopefully you learned something new and gained something valuable out of this exercise.

Learning how to code on a high-spec PC using the latest shining tool is definitely great! But looking at the past and understanding why things are the way they are is also important. It's an opportunity for us to understand why we do things the way we do, while giving us a good idea of what to expect from the future.

See you soon!