OSI Is Not What You Think

If you’ve ever studied anything at all to do with computer networking, you’ll almost certainly have heard of the OSI model.

For over 40 years, this model has been decorating the early chapters of networking textbooks as a description of how networking is implemented as a stack of seven layers, starting from the physical media layer at the bottom – which covers everything from cables and radio to pigeons. As you move up the stack, the layers become increasingly abstract, culminating in the Application layer at the top; this is where humans typically interact with the network through web browsers, instant messengers, and so forth.


But what's rarely even acknowledged nowadays is that OSI is far more than this seven-layer model; it was in fact a fully fleshed-out set of standards and networking protocols, originally intended to be the underlying technology of the Internet before TCP/IP usurped it. In fact, as we'll see, there are still vestiges of OSI deeply embedded in the Internet technologies we use every day, like living fossils of a bygone age. Like the dinosaurs, they are a reminder that things could have turned out very differently.

It all goes back to the 1960s, when people were starting to find ways to get their computers to talk to each other. The US Department of Defense established a research team to develop a standard way for computers to communicate: one that was decentralized and not dependent on specific manufacturers. This was the ARPANET; over the following decade it brought forth a whole suite of concepts and protocols, including TCP/IP, which proved very effective, robust, and scalable.

By the late 1970s, other countries had started to standardise their national computing infrastructures using their own protocols and technologies, and it became apparent that an international standard was needed so that we could all join hands across the globe in perfect harmony. So, in 1978, the International Organization for Standardization (ISO) started work on an open standard for international computer networks, which they named, confusingly, Open Systems Interconnection (OSI). Thus the seven-layer model was born.

The revolutionary idea of the layers allowed a modular approach to networking: each layer only needed to know how to pass messages to the adjacent layers, and could remain ignorant of the rest of the stack. Sending a message from one system to another involved it moving down through the stack until it reached the physical layer, at which point it was transmitted. At the other end, the physical layer would receive the message and pass it up through the stack to the correct recipient. Conceptually, each layer would believe it was communicating directly with its counterpart layer in the other system.
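To make the idea concrete, here's a toy sketch in C – entirely my own illustration, not anything from the OSI standards – showing how each layer simply wraps whatever the layer above hands it in its own header, knowing nothing about the rest of the stack:

#include <stdio.h>

/* Toy encapsulation: each "layer" prepends its own header to the
   payload it receives, and knows nothing else about the stack. */
static void wrap(const char *header, const char *payload,
                 char *out, size_t outlen)
{
    snprintf(out, outlen, "[%s]%s", header, payload);
}

int main(void)
{
    char transport[128], network[128], frame[128];

    wrap("TP",   "hello",   transport, sizeof transport); /* transport layer */
    wrap("NET",  transport, network,   sizeof network);   /* network layer */
    wrap("LINK", network,   frame,     sizeof frame);     /* data link layer */

    /* the receiving stack strips one header per layer on the way up,
       so each layer effectively talks only to its remote counterpart */
    printf("%s\n", frame);   /* prints [LINK][NET][TP]hello */
    return 0;
}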

You may be wondering why there are seven layers in particular, and that’s a really good question. Obviously the ISO thought it provided the optimal set of separations and abstractions, but I can’t help thinking there’s just something mystical about the number seven, or maybe it was someone on the committee’s lucky number. Who knows?

But the ISO didn't stop at the model; they went on to develop protocols and standards for all seven layers, including application layer protocols for email (X.400), file transfer, and directory services (X.500). They were standards-producing fools!


The UK government was impressed, and published the "Government Open Systems Interconnection Profile" – more commonly known as GOSIP – a specification mandating OSI as the standard for the UK. Soon afterwards, the US published FIPS 146-1 (AKA US GOSIP), which likewise mandated OSI, and it wasn't long before the rest of the world followed suit. OSI was the future of global networking; one stack to rule them all.

But, as we now know, it didn’t happen. TCP/IP became the single stack that formed the Internet, and OSI all but disappeared.

TCP/IP does have a layered protocol stack, but instead of seven layers there are only four or five (depending on who you ask). The lack of three layers doesn't seem to have hampered its effectiveness. These layers also don't line up cleanly with the OSI model, with some parts of TCP/IP bleeding across OSI's boundaries – but essentially it works in exactly the same way as the model suggested: each layer communicating with its remote counterpart as if they were directly connected. Not only do the layers not align with OSI's, the protocols are completely different and utterly incompatible. So what went wrong?

Perhaps the main reason for OSI's failure is timing. While the ISO were busy in their committees, over-engineering the minutiae of every single aspect of the mystical seven layers, the ARPANET was already up and running with TCP/IP, happily allowing computer networks across the world to talk to each other. The various GOSIP directives meant that this could only be a temporary state of affairs until everyone transitioned to OSI. If you look back at network documentation from the time, you'll see people discussing the OSI transition as a fait accompli despite the reality of TCP/IP's dominance. This makes a lot of sense when you consider the amount of money and effort that governments and manufacturers alike had already poured into OSI development. For example, the Digital Equipment Corporation (DEC) had rearchitected the new version of its DECnet networking stack to use OSI at its heart, while other big hitters like Sun, and even Microsoft Windows, offered OSI support options.

There were numerous other issues with OSI that hindered its adoption, not least of which was the design process. OSI was designed by committee, which led to complex and over-engineered protocols that attempted to cover every scenario the designers could envisage, regardless of how realistic those scenarios were.

Meanwhile, TCP/IP was being designed by engineers who were actually using it to collaborate. They dealt with the real-world issues that were hindering them and incorporated the solutions into the next revision of the standard proposal. And they were proposals rather than fully fledged standards: frequently the formal standard never appeared, and the previous proposal effectively became the standard. It was a living set of protocols, and still is to this day.

The over-engineering I alluded to earlier was also a real problem with OSI: the standards were huge and complicated. Even the basic lower layers were orders of magnitude more complex than their TCP/IP equivalents. Whereas TCP/IP had a single network layer protocol – the Internet Protocol (IP) – OSI had several options, including the Connectionless Network Protocol (CLNP) and the X.25 Packet Layer Protocol (PLP). At the transport layer, both offered connection-oriented protocols: OSI had the imaginatively named "Transport Protocol", which came in five separate classes, TP0 to TP4, providing varying levels of error correction and features, whereas TCP/IP has TCP, which most closely resembles TP4.

Another significant area of complexity was addressing. As you may know, the Internet uses fixed-length addresses: 4 bytes originally, and 16 bytes for IP version 6.

The workhorses of the Internet, responsible for sending packets of data from one place to another, are called "routers", and they rely on these addresses to get packets to the right place. Routers need to be fast and efficient, so the simpler the addresses, the easier they are to process, and the faster the routers can do their job.

OSI addresses were known as Network Service Access Points, or NSAPs. Rather than being a fixed length, they could be anywhere from 8 to 20 bytes depending on the type of communication you wanted. This makes processing the packets a great deal harder, and far less amenable to simple hardware optimizations. For example, in IP, the source and destination addresses are always 4 bytes long and always located at the same offset inside the packet, so anything that needs to process a packet can just look at the right offset and read the address in one go. With OSI, in order to process the address you need to examine the first few bytes, then look up the type of NSAP being used, before you can even begin to read the rest of the address. This not only slows down processing – especially considering how comparatively slow router hardware was at the time – but also requires more memory and creates more room for bugs.
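To make that concrete, here's a minimal sketch in C – my own illustration, not code from any real router – contrasting the two approaches. The IPv4 offset is the real one from the IP header; the NSAP formats and lengths are simplified for the example:

#include <stdint.h>
#include <string.h>

/* IPv4: the destination address is always 4 bytes, always at byte
   offset 16 of the header, so extraction is one fixed-offset read. */
uint32_t ipv4_dest_addr(const uint8_t *packet)
{
    uint32_t addr;
    memcpy(&addr, packet + 16, 4);
    return addr;
}

/* NSAP (simplified): the first byte, the Authority and Format
   Identifier (AFI), says which address format follows, so you must
   branch before you even know how long the address is. */
int nsap_addr(const uint8_t *field, uint8_t *out, size_t *out_len)
{
    switch (field[0]) {
    case 0x49: *out_len = 8;  break;  /* "local" format (length illustrative) */
    case 0x47: *out_len = 20; break;  /* ICD binary format (length illustrative) */
    default:   return -1;             /* unknown AFI: give up */
    }
    memcpy(out, field, *out_len);
    return 0;
}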

On top of these complexities, getting hold of the standards documents was far more expensive and cumbersome than with TCP/IP, whose specifications could simply be downloaded for free. OSI consisted of dozens of formal ISO standards documents which, apart from anything else, weren't cheap to buy. For some small organizations and businesses this made implementing the OSI stack far more difficult. Implementing TCP/IP, on the other hand, was free, simpler, and already running across the globe.

Despite the failure of OSI to become the single standard for global networking, parts of it still exist and are widely used in 2026, with very few users having any idea of their history. Here are a few examples of the ghosts of OSI that we still use today.

Every time you connect to a website, your computer establishes a secure connection with the web server you're connecting to. As part of this exchange, the web server presents your computer with a digital certificate containing various details about the site and proof that it is who it says it is. Here is an extract from the certificate for www.google.com:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            19:9c:e7:2e:c5:fe:cf:43:10:2e:cc:23:0e:0c:5a:13
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = US, O = Google Trust Services, CN = WR2
        Validity
            Not Before: Mar 16 08:39:57 2026 GMT
            Not After : Jun  8 08:39:56 2026 GMT
        Subject: CN = www.google.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:ee:5d:85:41:84:3b:20:56:9b:02:d9:42:75:71:
                    e5:e1:e0:b2:bb:8b:8d:11:03:1d:75:30:8d:4d:de:
                    68:fe:c3:b3:f0:a7:85:5d:08:e1:c9:03:df:66:13:
                    e2:cd:98:32:88:dd:9c:cb:5b:04:34:0c:d5:7d:73:
                    2d:cc:d1:d6:3e
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier: 
                7B:FE:6F:67:B3:A8:00:22:79:90:35:8D:84:36:5C:E7:42:0A:1E:A4
            X509v3 Authority Key Identifier: 
                DE:1B:1E:ED:79:15:D4:3E:37:24:C3:21:BB:EC:34:39:6D:42:B2:30
            Authority Information Access: 
                OCSP - URI:http://o.pki.goog/wr2
                CA Issuers - URI:http://i.pki.goog/wr2.crt
            X509v3 Subject Alternative Name: 
                DNS:www.google.com

These digital certificates are called X.509 certificates, and if you’ve ever set up your own domain name with HTTPS, you’ll have needed to obtain one for your server.

X.509 refers to an ISO standard that formed part of X.500, the OSI application layer directory service.

X.500 was a vast, complex set of standards that together defined a hierarchical directory system which was intended to create a global directory of everything and everyone on the network. The Internet would be a very different place if X.500 had become reality: anonymity would be far more difficult, but proving who you were would be a natural part of using the network. Your identity within X.500 would consist of a hierarchical list of strings going from country down to name. For example:

C = US, O = Microsoft, OU = C Suite, CN = Bill Gates

If you look back at the Google X.509 extract above, you'll notice that it contains the declaration:

C = US, O = Google Trust Services, CN = WR2

This represents the X.500 identity of the issuer of the certificate.

A part of OSI which thankfully never took off is X.400: another technological leviathan, this one implementing email. It used X.500 addressing, so your email address could have been something like:

C = US, O = Example University, OU = English Department, CN = Arnold Student

Rather than:

a.student@example.edu

The core of X.500 itself still lives on, albeit in a very cut-down form, as the "Lightweight Directory Access Protocol" (LDAP for short). LDAP kept the same principles as X.500, but in a form that is far simpler and easier to implement. As LDAP began to take off in various organizations, Microsoft took it and mutated it into what became Active Directory, which in turn became the de facto standard for corporate network management.

As well as X.509 surviving, one of its underlying standards is still widely used today, especially in telecommunications protocols: ASN.1 (Abstract Syntax Notation One). ASN.1 defines a notation for describing complex data structures, along with encoding rules for representing them efficiently in binary; X.509 certificates use ASN.1 for encoding all of their fields and data. Whenever you make a call over 5G, ASN.1 is used by the telecoms network as part of the protocol.
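As a flavour of what that looks like on the wire, here's a minimal decoder sketch in C – my own, not taken from any crypto library – for the tag and length octets that prefix every field in a DER-encoded X.509 certificate:

#include <stddef.h>
#include <stdint.h>

/* Decode one DER tag-length header; returns header size in bytes, or -1.
   Lengths are either one byte (0x00-0x7F), or a byte 0x80+N followed
   by N big-endian length bytes. */
int der_header(const uint8_t *buf, size_t buflen, uint8_t *tag, size_t *len)
{
    if (buflen < 2)
        return -1;
    *tag = buf[0];                /* e.g. 0x30 = SEQUENCE, 0x02 = INTEGER */
    if ((buf[1] & 0x80) == 0) {   /* short form: length fits in 7 bits */
        *len = buf[1];
        return 2;
    }
    int n = buf[1] & 0x7F;        /* long form: n length bytes follow */
    if (n == 0 || n > 4 || (size_t)(2 + n) > buflen)
        return -1;
    *len = 0;
    for (int i = 0; i < n; i++)
        *len = (*len << 8) | buf[2 + i];
    return 2 + n;
}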

There are numerous other "living fossils" from OSI in wide use today, yet OSI itself is almost unknown. The fact that they still play a vital role in modern networking some 40 years later shows that at least they were robust. I can't personally say I'm sorry that OSI became extinct: I once had to try and diagnose some issues with an OSI implementation, and it was painful. Give me TCP/IP any day.

As a final thought, it’s quite ironic that the seven-layer OSI model is still used when teaching networking, yet what’s not taught is what happened when it was actually implemented.

[This was intended to be the voice over for a video, but not only is the process of making the video taking far longer than I expected but I think it’ll bore the arse off people. So here it is anyway.]


Reviving a beautiful relic

Sometimes, it's nice to spend time on eBay looking at vintage computer equipment. I couldn't ever justify actually buying anything in that category, because spending money on increasing the amount of old junk I have in my office is rarely a good idea. So instead, I have found that drinking before eBaying makes buying useless crap, purely because you like the look of it, much, much easier.

The keyboard pictured above called out to me, and if you can’t understand why then I wouldn’t be at all surprised; just understand that to me, it looked fascinating and beautiful. Yes, I know there is a missing key, but to me that flaw makes it even more adorable. There was very little information regarding the system it came from, beyond the word “Qantel”, which I assumed was a misspelling of “Quantel“. It wasn’t. But I made it mine regardless.

A bit of searching revealed it to be the keyboard from an interesting looking terminal called the VT3, and, of course, someone had been kind enough to make the manual available online. Younger readers may be astonished to know that in the golden age of computers, even basic equipment tended to come with a manual, or set of manuals, that not only described how to use the equipment, but how to service it, and even how it worked! Imagine!

Obviously, as well as appreciating it as an objet d’art, I wanted to see if I could get it to actually work as a functional keyboard; keyboards of this vintage are built to last, and out-klack even the klackiest cherry-loaded custom mechanical gaming keyboard. The only problem is that it predates any keyboard standard that you’re likely to have in a modern OS; not just pre-USB but pre-PS/2 and pre-AT. Fortunately, the manual proved extremely informative.

The keyboard had a single cable coming out of it with a 7 pin Viking connector, the plastic sheath of which was starting to break. The simplest solution would have been to remove the connector and connect to the wires directly, but a part of me didn’t like the idea of losing that little piece of history – and who knows, maybe one day I’ll happen across the rest of a VT3 and get it working. That won’t happen.

The manual contained all of the schematics for every part of the VT3, including a numbered pin-out. The manufacturers of the connector were kind enough to etch the pin numbers on the face of the socket. As you can see from the picture, the numbering is peculiar, with pin 1 in the center and the rest of the numbers spiraling around it; also, pin 7 is not used. The connections were described as follows:

1 - DKEY+
2 - GND
3 - CLOCK+
4 - STROBE+
5 - +5V
6 - ALARM-

The description of these signals was a bit vague, but from what I could determine, the keyboard uses a type of synchronous serial interface. When a key is pressed, the STROBE+ line becomes active, then it is the job of the connected interface to produce a clock on the CLOCK+ line, and with each clock cycle the keyboard presents the next bit of the scan code on the DKEY+ line. ALARM- can be invoked to make the keyboard bleep.
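So the host drives the clock, and the keyboard shifts out one bit per clock cycle. A minimal sketch of that read loop – with hypothetical gpio_read()/gpio_write() helpers standing in for whatever GPIO library you're actually using – looks something like this:

#include <stdint.h>

/* Hypothetical GPIO helpers - substitute your library's equivalents. */
extern int  gpio_read(int pin);            /* returns 0 or 1 */
extern void gpio_write(int pin, int value);

enum { PIN_DKEY, PIN_CLOCK, PIN_STROBE };  /* illustrative pin ids */

/* Clock in one 8-bit scan code once STROBE+ goes active. The bits
   arrive least-significant first (more on bit order below). */
uint8_t read_scancode(void)
{
    while (!gpio_read(PIN_STROBE))
        ;                                  /* wait for a key press */

    uint8_t code = 0;
    for (int bit = 0; bit < 8; bit++) {
        gpio_write(PIN_CLOCK, 1);          /* keyboard presents the next bit */
        if (gpio_read(PIN_DKEY))
            code |= (uint8_t)(1u << bit);  /* assemble LSB first */
        gpio_write(PIN_CLOCK, 0);
    }
    return code;
}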

For testing this out, I decided to use an Adafruit FT232H breakout board: the Swiss Army knife of signal hackery. Apart from the fact that I had one to hand, it could also provide the 5V power needed by the keyboard, and it's easily controlled with simple code. Wiring it up would also be trivial, because its tolerance for 5V signals means it can be connected directly to the socket with DuPont wires.

To attach each DuPont wire to the keyboard connector, I removed the black plug from one end and replaced it with a slim heat-shrink tube so that they were thin enough to coexist happily when their pointy ends were plugged into the Viking connector. A bunch of electrical tape was then wrapped around the wires to keep them in place. As you can tell, I’m a real craftsman.

The other ends of the DuPont wires I randomly placed on the GPIO pins of the FT232H. For those that are playing along at home, the wiring turned out to be as follows:

1 - DKEY+   - White    D0 in
2 - GND     - Blue     GND
3 - CLOCK+  - Yellow   D7 out
4 - STROBE+ - Green    D2 in
5 - +5V     - Orange   5V
6 - ALARM-  - Brown    D5 out

The FT232H is a tremendously useful chip, but it's proprietary, and the official libraries are closed source. Normally that's not something I can deal with, but thankfully someone has written an open source, reverse-engineered library for talking to it.

It's important to note that the computer I'm trying to get this keyboard talking to is running Linux. No apologies for that. All of the code described below is available from GitHub; there's a link at the end.

Initially, I wrote a stupidly crude program to make sure that the keyboard was actually working as the manual suggested. It simply monitored the STROBE line watching for a state change, at which point it toggled the CLOCK line, effectively creating a clock train. Meanwhile the DKEY line was monitored for the key code. Here is a logic trace of the keys “a”, “s”, and “d” being pressed.

[Logic trace: the keys "a", "s", and "d" being pressed]

[Logic trace: detail of the "s" key]

Every time the CLOCK line goes high, the state of the DKEY line is read: if it's high, the bit is set (1); if it's low, the bit is unset (0). From the trace above you can see that the bits of the code are 01001000. The bits arrive LSB to MSB, but when we write binary we conventionally put the MSB on the left and the LSB on the right. In other words, we need to reverse the bits to get the key code: 00010010, or 18 in decimal.
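If, like me reading the trace, you capture the bits in arrival order and write them down left to right, a small helper does the reversal. This is an illustrative snippet of my own, not code from the repository:

#include <stdint.h>

/* Reverse the bit order of one byte: 01001000 -> 00010010 (18). */
uint8_t reverse8(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++)
        r = (uint8_t)((r << 1) | ((b >> i) & 1));
    return r;
}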

The shift key doesn’t produce a separate key code, instead it causes bit 7 of the key code to be set when other keys are typed. For example, compare the trace for “s” above with SHIFT+”s”:

[Logic trace: shift and "s"]

You can see that this time the code is the same as before but with the 8th bit also set, yielding a code of 10010010 – decimal 146 (i.e. 18 + 128).
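In code, splitting a raw code into the base scan code and a shift flag is just a couple of mask operations (again an illustrative snippet of mine, not from the repository):

#include <stdint.h>

/* Split a raw code: 10010010 (146) -> base 18 ("s") with shift set. */
void split_code(uint8_t raw, uint8_t *base, int *shift)
{
    *base  = raw & 0x7F;           /* low 7 bits index the keymap */
    *shift = (raw & 0x80) != 0;    /* bit 7 is the shift flag */
}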

You may notice that 18 is not the ASCII code for the letter "s"; in fact it's not related to "s" in any standard coding. The key codes are completely arbitrary, derived from how the key-switches are wired together. Consequently, there needs to be a mapping from the key codes to the actual keys they represent. So, I decided to map them manually, by hitting every key and recording the scan code for each. There was probably a map in the manual somewhere, but as I was mapping them to Linux key codes anyway there would have been a lot of manual work regardless. It turned out that this process really didn't take very long, and I ended up with a single file containing all the mappings: keymap.c.

Here are the first few lines as an example. In a nutshell, we have an array containing all 127 possible key codes, indexed by scan code. You'll see that the 19th entry (i.e. index 18) is "KEY_S". Those "KEY_" codes come from the Linux header file input-event-codes.h, by the way. It wasn't essential to use them, but they come in handy further down.

#include "input-event-codes.h"
#include "keymap.h"


#define KEY_MAP_DEF(x) {x,#x}

mapping_t keymap[127] = {
                KEY_MAP_DEF(0),
                KEY_MAP_DEF(KEY_NUMERIC_7),
                KEY_MAP_DEF(KEY_NUMERIC_4),
                KEY_MAP_DEF(KEY_INSERT),
                KEY_MAP_DEF(0),
                KEY_MAP_DEF(KEY_CLEAR),
                KEY_MAP_DEF(0),
                KEY_MAP_DEF(/*KEY_TRANSMIT*/ 0),
                KEY_MAP_DEF(KEY_APOSTROPHE),
                KEY_MAP_DEF(KEY_SPACE),
                KEY_MAP_DEF(KEY_LEFTBRACE),
                KEY_MAP_DEF(KEY_ENTER),
                KEY_MAP_DEF(KEY_BACKSLASH),
                KEY_MAP_DEF(KEY_GRAVE),
                KEY_MAP_DEF(KEY_BACKSPACE),
                KEY_MAP_DEF(KEY_NUMERIC_1),
                KEY_MAP_DEF(KEY_A),
                KEY_MAP_DEF(KEY_C),
                KEY_MAP_DEF(KEY_S),
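                /* ...and so on for the remaining scan codes, up to index 126 */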

So, with this keymap it was now possible to make sure everything was working as expected. I made my little program print out the correct key code every time a key was pressed and, shockingly, it all seemed to work first time.

At this point, it would normally be time to switch out the FT232H for something that could mimic a USB keyboard interface, but then it occurred to me that we may not need to yet. Linux provides user-mode interfaces to the entire input event system, meaning that, in theory, we could make this keyboard available to the system by running a simple userland program. I’d never messed around with the input subsystem before, but it seemed like it could be a fun little learning experience.

The API for the input system (uinput) looked surprisingly simple, but Internet wisdom suggested that the best way to interact with it was via the libevdev library rather than talking to /dev/uinput directly; the library is designed to help you avoid the gotchas and bugs.

Using the library is surprisingly straightforward.

struct libevdev *dev = libevdev_new();
libevdev_set_name(dev, "Qantel Keyboard");
libevdev_enable_event_type(dev, EV_KEY);
libevdev_enable_event_type(dev, EV_SYN);

You create a libevdev device by calling libevdev_new(), and then enable the events that you want it to receive. For a keyboard, all we really need is EV_KEY, which is sent on any key event (e.g. key down or key up), and EV_SYN, which tells the input system that we're ready for it to process the EV_KEY events we've already sent.

Next is a slightly odd thing: we have to enable every possible event code that we're going to send. So we loop through the keymap and enable each one:

    /* the keymap holds 127 entries (one per 7-bit scan code) */
    for (int k = 0; k < 127; k++) {
        if (keymap[k].keycode) {
            int ret = libevdev_enable_event_code(dev, EV_KEY, keymap[k].keycode, NULL);
            if (ret != 0) {
                fprintf(stderr, "Error registering event type: %s\n", keymap[k].name);
                return 2;
            }
        }
    }

Next, we create a uinput device from our libevdev device:

struct libevdev_uinput *uidev;
libevdev_uinput_create_from_device(dev, LIBEVDEV_UINPUT_OPEN_MANAGED, &uidev);

This simplifies everything and manages the interactions with the uinput device for us.

Once this is in place, we can run the existing code to clock in the keystrokes, and for each one send corresponding events.

                libevdev_uinput_write_event(uidev, EV_KEY, key, 1);
                libevdev_uinput_write_event(uidev, EV_SYN, SYN_REPORT, 0);
                usleep(1000);
                libevdev_uinput_write_event(uidev, EV_KEY, key, 0);
                libevdev_uinput_write_event(uidev, EV_SYN, SYN_REPORT, 0);

This corresponds to sending a key down event, waiting 1ms and then sending a key up event.

This was enough to get it acting as a functional system keyboard for my laptop. Here's a little demonstration video:

The little bit of code is available on GitHub for the curious. It should be noted that this is just a quick hack for testing the keyboard, and isn't designed to be a keyboard driver. Unfortunately the FT232H doesn't provide a good way to trigger interrupts, so we're polling it – and that is far from ideal. Also, at present we're not dealing with modifier keys (e.g. shift/ctrl). If anyone's interested, we can develop this further.