IrDA


Light: it's the radiation we can see. The communications potential of light is obvious, and indeed, many of the earliest forms of long-distance communication relied on it: signal fires, semaphore, heliographs. You could say that we still make extensive use of light for communications today, in the form of fiber optics. Early on, some fiber users (such as AT&T) even preferred the term "lightguide," a nice analogy to the long-distance waveguides that Bell Laboratories had experimented with.

The comparison between lightguide and waveguide illuminates (heh) an important dichotomy in radiation-based communication. We make wide use of radio frequency in both free-space applications ("radio" as we think of it) and confined applications (like cable television). We also make wide use of light in confined fiber optic systems. That leaves us to wonder about the less-considered fourth option: free-space optical (FSO) communications, the use of modulated light without a confined guide.

Well, if I had written this two or three years ago, free-space optical might have counted as quite obscure. The idea of using a modulated laser or LED light source for communications over a distance is actually quite old. Commercial products for Ethernet-over-laser have been available since the late 1990s and achieved multi-gigabit speeds by 2010. Motivated mostly by Strategic Defense Initiative and Ballistic Missile Defense Organization requirements for hardened communications within satellite constellations, experiments on a gigabit laser satellite-to-ground link were underway in 1998, although the system ultimately only provided satisfactory performance at a rate of around 300 Mbps. As it turns out, FSO computer networking is nearly as old as computer networking itself, with a 1973 experimental system briefly put into use at Xerox PARC.

Despite the fact that FSO systems have been generally available and even quite functional for decades, they remained a niche technology with very little public profile until the phenomenon of low-orbit communications constellations (namely Starlink) put the concept of inter-satellite laser communication into the spotlight. Various experimental satellite-to-satellite systems date back to the early 2000s, alongside more or less clandestine military applications over the same period, but the first real production system is probably the EU's EDRS, which went live in 2016. Starlink didn't really get the laser technology working until 2022. That's one of the interesting things about FSO: it seems intuitively like it should work, it does work, but it's a technology that has often sat dormant for many years at a time.

Well, thinking about satellites, we all know that space is hard. There are formidable technical challenges around aiming and detecting lasers in space, and the rate of iteration is slowed by the long timelines of aerospace projects. But what about down here on earth? Where everything is so much easier? Well, we got Li-Fi. Li-Fi is a largely stillborn technology, and not the topic of this article, so I will resist the urge to explain it too thoroughly. As the name suggests, it's intended to provide a capability similar to Wi-Fi (short-range networking between multiple devices) but using visible light. Despite various demonstrations of gigabit or faster systems, Li-Fi has next to zero commercial adoption, with most uptake in military applications. There's just something about the military and light, which we'll get to later. But here's what I want to discuss today: the golden age of FSO communications, a brief period where the cutting-edge technology behind the television remote control appeared to be the future of short-range computer networking.

During the 1980s, Hewlett-Packard manufactured scientific and graphing calculators with a feature set that increasingly overlapped with personal computers. This era can be hard to understand for people around my age or younger, who associate graphing calculators exclusively with the few Texas Instruments models blessed (and demanded) by common high school math textbooks. These are a holdover, a specter, of earlier years in which scientific and graphing calculators were serious technical instruments and some of the most sophisticated computing devices available to many of their owners. Features like BASIC programmability, still widespread in graphing calculators but increasingly divorced from actual applications (besides ignoring the math teacher and writing primitive CRPGs), used to be an important part of business computing.

Engineers would obtain (even buy!) calculator BASIC programs that automated common calculations. Life insurance salespeople might quote rates using a calculator BASIC program. Calculator manufacturers often sold ROM cartridges that added domain-specific functions, and these modules now represent the many applications of the programmable calculator: financial modeling, chemical engineering, statistics. Not that many years later, the whole field was virtually wiped out by portable computers, but not before calculators and computers underwent an awkward near-convergence (best exemplified by the TI-95, a calculator with computer characteristics, and the TI-99, a computer with calculator characteristics).

The point is that calculators might fairly be called the first practical portable computers, and people increasingly used calculators as part of business and engineering workflows. The challenge here is that computer applications tend to involve storage and processing of large numbers of records, a task that the small (and often volatile) memory of calculators didn't encourage. A bookkeeper might use a calculator to total the day's transactions, an engineer might use a calculator to compute the capacity of a beam. Both of these tasks involve math, the forte of the calculator, but they also require documentation. The bookkeeper and the engineer both need to record the results of their work for later review.

An interesting early approach by HP reflects the tradition of accounting: bookkeepers tended to use "adding machines" rather than calculators, a distinction that is mostly forgotten today but still apparent to anyone who buys an adding machine and finds it to be a little odd compared to a typical calculator. Besides the keypad layout (with an oversize dual-function +/= key), adding machines usually include a printer. Turn on the printer, total your transactions, and you now have a slip of paper that you can use to check your work, and even retain as part of your records.

These machines were big and bulky, though, and they still are today. What if you could have the convenience of a pocket scientific calculator and a printing adding machine in the same product family? Well, you could: by the end of the 1980s, many HP scientific and graphing calculators supported the 82240B accessory printer. It was even wireless.

HP calculators sent data to the 82240B printer using infrared light. For this purpose, HP developed a simple unidirectional protocol based on a UART hooked up to an LED (and, on the other end, a UART hooked up to a PIN diode). Called "RedEye," the calculator-printer application seems to have evolved over a short span into a more general-purpose, bidirectional protocol called HP SIR, for Serial InfraRed. As the name implies, SIR provided an interface very much like an RS-232 serial port, using a signaling scheme that was even fairly similar to RS-232, if you put the whole thing through an LED-to-photodiode step each way.

There were, naturally, a few adaptations to the nature of FSO communication. HP SIR was bidirectional but only half-duplex, because the realities of optics make it very difficult to build an IR transceiver whose receiving detector will not be completely blinded whenever the transmitting LED is active. Power was also a major concern: the infrared LEDs of the late '80s were not very efficient, and portable devices like calculators were expected to achieve a decent runtime on a few AAs. To cut down on power consumption, HP SIR replaced RS-232's bipolar non-return-to-zero signaling with a return-to-zero scheme in which the actual pulses (i.e. the periods when the LED is active) were much shorter than the bit interval, resulting in a low duty cycle. Since you can only really turn an LED on one way, HP SIR replaced bipolar signaling (e.g. positive for 1 and negative for 0) with a system in which the presence of a pulse in a bit interval indicated 0, and the absence of a pulse indicated 1.
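To make that concrete, here's a minimal sketch in Python of the scheme as IrDA later standardized it, with the pulse occupying 3/16 of the bit time (that 3/16 figure is the IrDA SIR value; I'm assuming HP's original timing was in the same spirit). The framing is plain UART: a start bit, eight data bits LSB-first, a stop bit.

```python
def sir_encode_byte(byte, bit_time_us=8.68):
    """Encode one 8N1 character as SIR-style IR pulse timings.

    Returns (offset_us, width_us) tuples, one per 0 bit. A short pulse
    (LED on for 3/16 of the bit time) signals a 0; a 1 is just silence.
    The default bit time of ~8.68 us corresponds to 115.2 kbps.
    """
    pulse_us = bit_time_us * 3 / 16
    # UART framing: start bit (0), eight data bits LSB-first, stop bit (1)
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    pulses = []
    for n, bit in enumerate(bits):
        if bit == 0:
            pulses.append((n * bit_time_us, pulse_us))  # LED fires briefly
        # for a 1 bit, the LED stays dark for the whole interval
    return pulses

# 'A' (0x41) fires the LED only in the start bit and the six 0 data bits,
# so the duty cycle works out to well under 15% even for this byte
for offset, width in sir_encode_byte(0x41):
    print(f"pulse at {offset:6.2f} us, width {width:.2f} us")
```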

If you paid much attention in your college data communications course, you might wonder about clock recovery when there are a lot of 1s in a row (and thus no pulses at all). We'll get to that later. I had a very painful data communications class that I am trying to forget, and I haven't quite braced myself to discuss line coding yet.

HP SIR was extended to numerous applications, including what we would definitely recognize as portable computers today. HP's "palmtop" computers, like the x00LX series, were just about the size of pocket calculators but ran a full-on DOS. These were a transitional stage between early portables and later PDAs, but they introduced a need that would only become bigger in the PDA era: a quick, convenient way of transferring data between the portable computer and other devices. HP SIR was the perfect answer. Start some software, point the palmtop at the desktop, and press send... infrared provided a surprisingly cost-effective way to implement these local connections without the need for cables. HP didn't forget the printers—this was HP, after all—and palmtops could wirelessly print to select HP printers by "point and shoot."

HP wasn't the only company with a short-range infrared protocol. Japan's Sharp had developed a similar protocol, also for calculators, that might not be much remembered in the United States except for its adoption on Apple's Newton series of PDAs. The Newton's awkward sibling, General Magic's Magic Cap, had a similar (but of course incompatible) infrared capability called MagicBeam. The early '90s brought the PDA and the PDA brought an obvious demand for a consumer-friendly, short-range wireless network protocol... and that's why we ended up with at least a half dozen of them, virtually all infrared-based. Everything from the relative cost of components to the regulatory landscape meant that infrared was easier and cheaper to productize than radio frequency protocols, so infrared was the direction that almost everyone went.

While just about everyone in consumer electronics eventually got involved, it seems to have been Hewlett-Packard that stepped up to drive standardization. I don't have conclusive evidence, but I think there's a fairly obvious reason: HP was one of several companies making portable devices, an area where they weren't unsuccessful but never enjoyed total market dominance. Printers, though... printers were a different matter. HP enjoyed clear leadership in printers from the late '80s and perhaps to our present day. People might own a PDA from one of many brands, but when it came time to print, they'd be pointing that PDA at an HP product.

In 1993, Hewlett-Packard hosted an industry meeting that kicked off the Infrared Data Association, or IrDA. As a group of HP employees recounted the event, it was a smash hit: there were far more attendees than expected, representing more than fifty companies in both consumer and industrial electronics. Within a few years, IrDA's membership grew to 150 companies—including IBM, Microsoft, and Apple. Commercial adoption was similarly impressive: in the late 1990s, IrDA transceivers were a ubiquitous feature of phones, printers, and computers. You may have never used IrDA, but if you were old enough in the 1990s, you almost certainly owned devices with IrDA support.

During the early meetings of IrDA, various candidates were considered before, unsurprisingly, HP SIR was selected as the basis of the new industry-standard protocol. IrDA 1.0 is essentially a rebrand of HP SIR, and "SIR" persisted as the common name for IrDA, although the "S" was changed to "Standard" or, as later versions introduced higher speeds, "Slow." Slow it was, at least by modern standards. IrDA ran at 115.2 kbps, but for the then-typical purpose of replacing an RS-232 serial connection, 115.2 kbps was plenty (the same as the maximum speed supported by common serial controllers at the time).

Early versions of IrDA suffered from a lack of standardization. Adopting HP SIR as the basis for IrDA brought in an already mature technology, a simple RS-232 based signaling scheme that was even easy to implement with UARTs. But that low-level standard was basically all IrDA standardized. The application layer, and even error detection and reliable delivery, were left to implementers. As you can imagine, everyone did things slightly differently and interoperability fell apart.

This is an old story in technology standards: you align on the low level, and then the next level up becomes the problem. Reliable interoperability ends up requiring standardized application protocols and, more than likely, peer and service discovery protocols. Well, guess what IrDA spent the mid-'90s on?

The full IrDA protocol stack, mostly created over the first few years of IrDA's existence, can be a little bit confusing because of the way that it reflects the history. The first IrDA standards were limited to the physical layer (initially SIR) and IrLAP, the Link Access Protocol. Besides some basic link control functions, IrLAP handles discovery. When a device wishes to initiate IrDA communications, it repeatedly transmits a random 32-bit ID. The frame timing of the beacon establishes the baseline for a time-slot-based access control mechanism; other IrDA devices that detect the beacons select a random time slot (in between beacon transmissions) in which to respond with their own ID. This discovery process happens at low speed with small packets, but it also includes capability information that the devices use to negotiate the highest speed supported on both ends.
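As a toy model of that contention scheme, here's a sketch in Python. The slotted responses, random 32-bit IDs, and slot counts follow the IrDA design (discovery runs with 1, 6, 8, or 16 slots); everything else, the device names included, is made up for illustration.

```python
import random

SLOTS = 8  # IrDA discovery uses 1, 6, 8, or 16 slots; 8 is a typical choice

def discover(nearby_devices):
    """Toy IrLAP discovery: the initiator beacons once per slot, and each
    listening device picks one random slot in which to answer."""
    initiator_id = random.getrandbits(32)  # random 32-bit device address
    chosen = {dev: random.randrange(SLOTS) for dev in nearby_devices}
    heard = {}
    for slot in range(SLOTS):
        # the initiator transmits a beacon marking this slot...
        responders = [d for d, s in chosen.items() if s == slot]
        # ...and whoever picked this slot replies with its own random ID.
        # Two replies in one slot collide and the initiator hears neither;
        # real devices simply retry discovery to catch anything they missed.
        if len(responders) == 1:
            heard[responders[0]] = random.getrandbits(32)
    return initiator_id, heard

_, found = discover(["printer", "palmtop", "phone"])
for name, address in found.items():
    print(f"discovered {name} at {address:08x}")
```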

Once the IrLAP discovery process completes, two IrDA devices will know each other's IDs and have an agreed upon set of basic parameters for ongoing communications. You will note that I say two devices. One of the interesting things about FSO networking is that the nature of light, that most things are opaque to it, creates significant limitations. While IrDA always incorporated features to enable multi-point connections (such as the time-slot contention management procedure during discovery), the physical specifications for IrDA only accommodate connections of up to one meter with the two transceivers contained within a 30-degree-wide cone originating at each of the devices. In practice, IrDA devices had to be so close to each other, and so exactly oriented towards each other, that it was rarely practical for more than two devices to participate in an IrDA connection. This de facto limitation to point-to-point applications became solidified by later development on IrDA standards, which (for the most part) ignored the possibility of multi-point applications. HP application notes describe point-to-multi-point connections as possible but not yet implemented in the higher layers, and it pretty much stayed that way.

Let's take a look at the higher layers, because SIR and IrLAP alone did not specify enough functionality to meet real use-cases. Initially, IrLAP was used as a basic transport layer for various proprietary protocols, but IrDA quickly developed a few open standards that went higher up. IrLMP, the Link Management Protocol, is the most important.

IrLMP can be divided into two sub-layers, although they're less layers and more parallel features that operate at the same level. LM-IAS, Link Management Information Access Services, is a discovery protocol for high-level applications. LM-MUX, Link Management Multiplexing, does what it says on the tin: multiplexes multiple logical connections over a single IrLAP interface.

First, we'll discuss LM-MUX, because it will make LM-IAS clearer. By the time IrDA was in development, TCP/IP was gaining ground as the industry standard for computer networking, and IrDA employed similar ideas about layering and logical connections. LM-MUX specifies a simple frame header that includes the 32-bit addresses of the two IrDA devices and a seven-bit value known as a "selector" or LSAP-SEL. LSAP stands for Link Service Access Point, and LSAPs are the main abstraction for application connections over IrDA. The seven-bit selector is analogous to a TCP/IP port number, so we could compare an LSAP connection (identified by a source address, destination address, and selector) to a TCP/IP connection and its address/port 4-tuple. If you're not familiar with this part of networking theory, it goes like this: in the world of TCP/IP, the combination of source and destination IP addresses and TCP port numbers uniquely identifies a TCP connection. Similarly, an IrDA connection is uniquely identified by the 3-tuple of source and destination addresses and selector. Like some of TCP/IP's competition, IrDA uses a single selector value for both sides of a connection. If you have read enough about networking to realize the implications of this fact, well, they are indeed true limitations of IrDA. In fact, as we will see later, there are some even odder limitations that emerge from the particular choices behind LSAPs.

Seven bits is not really that many bits: it allows for only 128 values. This made it impractical to statically assign selectors to applications, and IrDA never tried. Instead, IrDA relies on a port-mapping approach in which selectors are arbitrarily assigned to applications as needed. The mechanics rely on LM-IAS, so let's explain that.

LM-IAS is the exception to the rule, using a statically assigned selector (0). LM-IAS is based on a data structure (called an "information base" because this was networking in the '90s) consisting of a set of objects. Each object has a "class," identified by name, and an arbitrary number of key-value pairs called attributes. Both class names and attribute names are arbitrary strings, but IrDA encouraged (and to some extent mandated) a colon-separated hierarchical format. For both class names and attribute names, values starting with "IrDA:" were standardized while vendors were free to adopt their own prefixes for internal use (HP papers use the example of "Hewlett-Packard:").

The most important parts of LM-IAS were the "Device" class, which provided basic information on the device itself including a human-readable name, and the list of other classes which represented the applications that the IrDA device supported. For example, the class "Email" described the capability to transfer email messages, and contained a standardized attribute "IrDA:IrLMP:InstanceName" which contained a human-readable name for the email capability (useful if the device, for whatever reason, exposed more than one object of the Email class—which IrDA fully supported).

So, let's go back to the discovery scenario and add in LM-IAS. You point one device at another, it sends beacons, the second device responds, and a connection is established at the IrLAP layer. Now, IrLMP kicks in: an exchange of LM-IAS data allows the first device to present the user with a list of discovered devices (including their names) and a list of applications supported by those devices.

The attributes on LM-IAS objects can be anything, but one was particularly important and, in fact, required by the standard: IrDA:IrLMP:LSapSel (I do not know why it is capitalized this way!). That attribute provided the LSAP selector to be used to communicate with that specific application.
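In code, the whole arrangement is pleasantly mundane. Here's a sketch of an information base as a plain data structure, along with the lookup a client performs before connecting; the query function is modeled on LM-IAS's GetValueByClass operation, but the wire protocol itself isn't shown, and the selector value is an arbitrary example.

```python
# A toy LM-IAS information base: objects keyed by class name, each one a
# dict of attribute name -> value, following the examples in the text.
information_base = {
    "Device": {
        "DeviceName": "HP palmtop",      # the human-readable device name
    },
    "Email": {
        "IrDA:IrLMP:InstanceName": "Mail transfer",
        "IrDA:IrLMP:LSapSel": 0x05,      # selector assigned at runtime
    },
}

def get_value_by_class(base, class_name, attribute):
    """Model of the LM-IAS GetValueByClass query: look up one attribute
    of one object, identified by its class name."""
    obj = base.get(class_name)
    return obj.get(attribute) if obj else None

# A client that wants the Email service asks for its selector, then opens
# a connection to that LSAP. (Selector 0 would never be assigned to an
# application; it's statically reserved for LM-IAS itself.)
sel = get_value_by_class(information_base, "Email", "IrDA:IrLMP:LSapSel")
print(f"connect to LSAP selector {sel:#04x}")
```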

Well, that all sounds simple enough, but let's complicate things further. IrLMP went further than IrLAP alone, but still curiously lacked one of the properties we would expect: flow control. It's actually a little odder than that: IrLMP did provide basic flow control in an exclusive mode with only one logical connection, but when multiplexing was in use, it didn't attempt to address the many flow control problems that emerge with multiple connections (deadlocks, contention, etc). We need another protocol!

At this point, you can tell that internet influence is becoming significant, as IrDA introduced the "Tiny Transport Protocol" or TinyTP. TinyTP is, as the name suggests, much more comparable to TCP in the internet stack. TinyTP added flow control (at the individual connection level) and a robust mechanism for segmenting large payloads, a problem that IrLMP left as an exercise for applications.
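The flow control TinyTP added is credit-based: a sender may only transmit while it holds credit, and each data PDU spends one. On the wire this is a single header byte packing a "more" bit (for segmentation) with a seven-bit delta-credit field; the sketch below models the PDU as a plain tuple instead, so take the shapes as illustrative.

```python
class TinyTPEndpoint:
    """Toy model of TinyTP's credit-based flow control. Sending a data
    PDU costs one credit; the receiver grants credit back as it drains
    its buffer, so a slow receiver naturally throttles the sender."""

    def __init__(self, initial_credit=0):
        self.send_credit = initial_credit
        self.rx_queue = []

    def send(self, peer, data, more=False):
        if self.send_credit < 1:
            raise BlockingIOError("no credit: peer buffer is full")
        self.send_credit -= 1
        # PDU modeled as (more-bit, delta-credit, payload); the more-bit
        # flags that this is one segment of a larger payload
        peer.rx_queue.append((more, 0, data))

    def deliver(self, peer):
        """Consume one queued PDU and grant a credit back to the sender
        (real TinyTP piggybacks the grant on PDUs going the other way)."""
        more, _, data = self.rx_queue.pop(0)
        peer.send_credit += 1
        return data, more

a, b = TinyTPEndpoint(initial_credit=2), TinyTPEndpoint()
a.send(b, b"chunk 1 of 2", more=True)   # a large payload, segmented
a.send(b, b"chunk 2 of 2", more=False)  # final segment: more-bit clear
print(b.deliver(a))                     # delivery returns credit to a
```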

What makes TinyTP confusing is its relation to the other protocols. TinyTP is mostly a layer on top of IrLAP, but not quite. TinyTP relies on the exact same selectors as IrLMP (LSAPs), and it relies on LM-IAS for devices to negotiate which applications will run on which LSAPs. We can add a new standard attribute for LM-IAS objects: IrDA:TinyTP:LSapSel, which indicates that a service is available over TinyTP, and the LSAP selector to use. The result is that TinyTP and IrLMP are two parallel alternatives, providing similar interfaces and running over the same lower layer, but TinyTP also relies on IrLMP for initial connection setup.

Now, let's loop back to the implications of LSAPs. In the TCP/IP world, a connection is uniquely identified by the 4-tuple of addresses and ports. This means that one host can open two connections to the same service on another host, differentiated by the choices of source ports (which are ephemeral ports in the TCP/IP design). With IrDA, connections are identified by a 3-tuple, which means that you actually can't do this. A given host can only have one connection to a given service on another host, because there's nothing in the headers to differentiate two connections with the same LSAP selector. This is mostly just of academic interest, since the protocols defined on top of IrDA were designed with this in mind, but it's always interesting to see these differences between network architectures. So, here's another: since IrLMP and TinyTP use the same LSAP selector format, and run under IrLAP with the same addresses, you cannot differentiate between IrLMP and TinyTP connections to the same service. Once again, not a big problem in practice because everyone knew about this limitation and would not attempt to connect to the same service with both protocols, but it's still interesting that you can't. Compare UDP and TCP (an imprecise but still useful analogy for IrLMP and TinyTP): a UDP and a TCP connection to the same port number never collide, because the IP header itself records which protocol is in use. In practice, IrDA applications addressed the problem by assigning different LSAP selectors to the same application for TinyTP and IrLMP, for those applications that supported both.
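A few lines of Python make the difference concrete. This is just a toy connection table, not any real implementation, but it shows why the second IrDA connection has nowhere to go:

```python
# Connection tables keyed the way each protocol family identifies a
# connection; the values would be per-connection state in real life.
tcp_connections = {}   # key: (src_ip, src_port, dst_ip, dst_port)
irda_connections = {}  # key: (src_addr, dst_addr, lsap_sel)

# TCP: two connections to the same service coexist, told apart by the
# ephemeral source ports the client chose.
tcp_connections[("10.0.0.1", 50001, "10.0.0.2", 80)] = "state"
tcp_connections[("10.0.0.1", 50002, "10.0.0.2", 80)] = "state"
assert len(tcp_connections) == 2

# IrDA: a second connection to the same service produces the same key,
# so there is nothing in the headers to tell the two apart.
irda_connections[(0x12345678, 0x9ABCDEF0, 0x05)] = "state"
irda_connections[(0x12345678, 0x9ABCDEF0, 0x05)] = "state"  # collides
assert len(irda_connections) == 1
```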

IrLMP and TinyTP provided a pretty robust capability that met most needs for basic network connections, especially since TinyTP was similar enough to TCP/IP in its semantics that TinyTP connections could be treated as Berkeley sockets. IrDA applications could thus be written a lot like internet applications, using some of the same libraries and techniques. Of course, mentioning this must make you wonder: TCP/IP over IrDA? Yes, that's an application!

But first, we'll revisit the physical layer, because IrDA received several revisions of its bottom layer. SIR, the 115.2 kbps mode taken directly from HP, gave way with IrDA 1.1 to MIR, the Medium speed Infrared physical protocol. MIR operated at 1 Mbps, and it's particularly interesting to me because it reflects one of the major trends of IrDA development. IrDA 1.0 was published in 1994, basically as a formalization of existing HP SIR devices (which generally became IrDA devices with just software changes). IrDA 1.1 was published in 1995, just a year later, but is far more reflective of IBM than HP.

In 1995, IBM's ThinkPad line of portable computers was massively successful and basically defined the "business laptop." ThinkPads sold like hotcakes and found widespread use in the kind of applications that used to have people reaching for programmable calculators... and many more. But wireless networking was still a problem, and Wi-Fi wouldn't achieve widespread adoption for several more years. IBM leaned heavily into IrDA, and for many years it was a feature of every ThinkPad model.

MIR was fairly similar to SIR in terms of line coding, but faster. To address clock synchronization problems at the higher speeds, MIR used a bit-stuffing scheme taken from HDLC, part of the ISO network stack that was directly derived from SDLC, which was the data link protocol for IBM's SNA network stack. So, just as TCP/IP took over, IrDA went the SNA/ISO path, at least in a small way.
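Bit stuffing is also the answer to the clock recovery question I dodged earlier. A long run of 1s means a long silence on the IR link, so the sender inserts a 0 (which is to say, a pulse) after every five consecutive 1s, and the receiver strips those bits back out. A quick sketch, simplified in that a real HDLC receiver also has to treat flag sequences specially:

```python
def bit_stuff(bits):
    """HDLC-style bit stuffing: insert a 0 after five consecutive 1s,
    guaranteeing the receiver sees a pulse at least every six bit times."""
    out, run = [], 0
    for bit in bits:
        out.append(bit)
        run = run + 1 if bit else 0
        if run == 5:
            out.append(0)  # stuffed bit, removed again by the receiver
            run = 0
    return out

def bit_unstuff(bits):
    """Reverse operation: drop the bit that follows five consecutive 1s."""
    out, run, skip = [], 0, False
    for bit in bits:
        if skip:
            skip = False
            continue
        out.append(bit)
        run = run + 1 if bit else 0
        if run == 5:
            skip = True  # the next bit is the stuffed 0
            run = 0
    return out

payload = [1] * 12  # the worst case for clock recovery: pure silence
assert bit_unstuff(bit_stuff(payload)) == payload
print(bit_stuff(payload))  # a pulse now appears after every five 1s
```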

Subsequent revisions of IrDA introduced FIR (Fast Infrared) at 4 Mbps (1998), and VFIR (Very Fast Infrared) at 16 Mbps in 2001. By the time IrDA got to 16 Mbps, the "basically RS-232 over an LED" scheme had been replaced with a more sophisticated non-return-to-zero run-length-limited line coding, much more similar to what radio protocols used (which were both more sophisticated and more common by 2001). The fact that radio modulation methods could be applied to IR meant that the sky was the limit, and subsequent work introduced a 96 Mbps protocol (UFIR) and a 0.5 or 1 Gbps protocol called GigaIR. Unfortunately, GigaIR came much too late in the limited lifespan of IrDA. As far as I can tell, it never made it to any widely available products. Even 16 Mbps VFIR is rare in practice, so for most purposes we can consider 4 Mbps the fastest speed achieved by IrDA.
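For a taste of what these later codings look like (a detail the article doesn't otherwise need, so consider this background): the 4 Mbps FIR mode used 4PPM, pulse position modulation, in which each pair of data bits selects which quarter of a fixed symbol time carries a single pulse. The mapping below follows the published FIR tables as I understand them; the bit-pair ordering within a byte is my assumption for illustration.

```python
def encode_4ppm(data):
    """4PPM as used by 4 Mbps FIR: each 2-bit group becomes a 4-chip
    symbol containing exactly one pulse. Position, not presence, carries
    the information, so the duty cycle is a constant 25% and the receiver
    gets a transition in every symbol to recover its clock from."""
    table = {0b00: (1, 0, 0, 0), 0b01: (0, 1, 0, 0),
             0b10: (0, 0, 1, 0), 0b11: (0, 0, 0, 1)}
    chips = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # four bit pairs per byte
            chips.extend(table[(byte >> shift) & 0b11])
    return chips

# even all-1s data pulses once per symbol; no more silent stretches
print(encode_4ppm(b"\xff"))
```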

Aided by the simple and internet-like interface of IrLMP and TinyTP, IrDA was applied to quite a few tasks. One of the more common application protocols used over IrLMP was IrCOMM. I suppose that IrCOMM stands for Infrared Communications, but that doesn't mean that much, does it? IrCOM might be better, because IrCOMM provided emulation of traditional serial and parallel ports over IrDA. It sounds silly to run a serial port emulation protocol over a network protocol over a data link protocol over a physical layer that was originally designed to emulate a serial port, but the scope of IrCOMM's support for physical serial ports goes beyond just providing a character-oriented communications channel. IrCOMM provides a set of control messages and behavioral standards to replicate the full feature set of RS-232, including the control signals. It also provides integrity and reliable delivery, so that traditional serial port applications will reliably work over the unreliable IR link. IrDA as a drop-in replacement for serial ports proved popular, especially in industrial diagnostics and programming. A number of early digital cameras also supported IrDA for transferring images to computers or printers (or, in Japan, sending images over select IrDA-equipped payphones, because Japan is like that)—and they used IrCOMM to convey the same vendor-specific protocols they had used over serial cables.
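The framing that makes this work is simple: each IrCOMM data packet leads with a one-byte count of control-channel bytes, then the control parameters themselves (line settings, DTR/RTS state, and so on, encoded as type-length-value entries), then the serial payload. Here's a sketch of that framing; the parameter bytes in the example are hypothetical placeholders, not a real parameter encoding.

```python
def ircomm_frame(control: bytes, payload: bytes) -> bytes:
    """Build an IrCOMM-style data packet: one count byte giving the
    length of the control channel, the control bytes, then the data."""
    assert len(control) < 256
    return bytes([len(control)]) + control + payload

def ircomm_parse(frame: bytes):
    clen = frame[0]
    return frame[1 : 1 + clen], frame[1 + clen :]

# Most packets are pure data (a zero count byte); control bytes ride
# along when the emulated serial port's state changes. These control
# bytes are placeholders standing in for a type-length-value entry.
frame = ircomm_frame(b"\x20\x01\x01", b"AT\r")
control, data = ircomm_parse(frame)
print(control.hex(), data)
```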

Considering IrDA's history, it's no surprise that printing was another popular application. There were a few ways that an IrDA device could send a job to the printer (fragmentation remained a problem, to some extent, for the entire lifespan of the protocol), but the Cadillac option was IrJetSend. JetSend is a topic of its own, a surprisingly complicated and highly generalized protocol that could probably be used for just about anything. But HP developed it, so it was used for printers and scanners. IrJetSend let the printer describe complex user interfaces to the client, so you could print from your PDA with access to all of the capabilities of your workgroup printer. Living the dream.

My personal favorite IrDA application protocol is OBEX, and it's also the apex of IrDA's goal of interoperability. Many descriptions of OBEX compare it to HTTP, which is fair, and there are several tells (besides timeline) that suggest that OBEX's designers had the world wide web on their minds. OBEX operates over TinyTP, and after establishing a TinyTP connection to the LSAP advertised by the OBEX object in LM-IAS, an OBEX client sends a "CONNECT" message that includes a service name to indicate what the client wants to do.

Like HTTP, OBEX was designed for very general document-moving purposes and can thus be used for a lot of different things. If we stretch the HTTP analogy, we could say that the service name provided in the CONNECT message is a bit like the Content-Type header, as it tells the server what kind of thing the client intends to interact with. This analogy isn't great, because different OBEX services are likely to be handled by different applications. Well, let's just consider examples. One OBEX service is "file transfer," which is just for generic file copy operations. Others are IrMC, which performs offline PIM synchronization in vCard-family formats, and SyncML, which performs offline PIM synchronization in SyncML (an XML format). If you have no idea what I'm talking about when I say "offline PIM synchronization," well, I should probably write an article about it. In the meantime, look up "Microsoft ActiveSync" and try to imagine that it is 2003 and you own a PDA.

Once an OBEX connection is established, it proceeds using the verbs GET and PUT in basically the same way (and for the same purposes) as HTTP. You can get files, you can put files. The only real difference between the behavior of HTTP and OBEX here, besides that OBEX is vastly simpler, is that OBEX is stateful about the working directory (similar to FTP) so it has a SETPATH verb for changing the working directory.
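Reduced to a toy client/server exchange, a session looks like this. The opcode values are the real OBEX ones (with the high bit marking the final packet of an operation, which is why PUT appears as 0x82 rather than 0x02); the server object and everything it does are stand-ins of my own devising.

```python
# Real OBEX opcodes; the 0x80 bit marks the final packet of an operation
CONNECT, PUT_FINAL, GET_FINAL, SETPATH = 0x80, 0x82, 0x83, 0x85
SUCCESS = 0xA0  # the moral equivalent of HTTP 200

class ToyObexServer:
    """Stand-in OBEX server: a dict of files plus a working directory,
    the statefulness that makes OBEX feel more like FTP than HTTP."""
    def __init__(self):
        self.cwd, self.files = "/", {}

    def request(self, opcode, name=None, body=None):
        if opcode == CONNECT:
            return SUCCESS           # would also carry the service name
        if opcode == SETPATH:
            self.cwd = name          # changes state, unlike anything HTTP
            return SUCCESS
        if opcode == PUT_FINAL:
            self.files[self.cwd + name] = body
            return SUCCESS
        if opcode == GET_FINAL:
            return SUCCESS, self.files[self.cwd + name]

server = ToyObexServer()
server.request(CONNECT)
server.request(SETPATH, "inbox/")
vcard = b"BEGIN:VCARD\nVERSION:2.1\nFN:New Acquaintance\nEND:VCARD\n"
server.request(PUT_FINAL, "contact.vcf", vcard)  # beaming a business card
print(server.request(GET_FINAL, "contact.vcf"))
```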

So I guess OBEX is more like FTP? But it has headers like HTTP and HTTP verbs. I don't know, pick your favorite comparison.

Here's where all this matters: one of the showcase applications of IrDA is that you could send things between phones, like AirDrop or something. You flip open your phone (because this is an era in which it does indeed flip open), push a couple of buttons, point it at your new acquaintance's phone, and it is as if you provided a business card. You aren't just using infrared networking, you are Networking using infrared. Similarly, you can send files, but considering IrDA speeds you probably want to keep them slim. Fortunately the most popular use-case here was sharing photos, and any camera on a phone with IrDA was probably just getting into the megapixels.

In the early days of IrDA, this didn't actually work reliably, because of different application-layer implementations (such as IrMC vs. SyncML). When it did start working reliably, it's because your respective Nokia phones were exchanging vCard files using OBEX over TinyTP over IrLAP over FIR, the IrDA stack in its full glory.

Wait—stop—hold on. When I say that you can, you know, send a file to your friend's phone... maybe your favorite MP3... what would you call that? If you are the same kind of person as me, you will say "squirting," of course. You squirt a file to someone.

I want to squirt you a picture of my kids. You want to squirt me back a video of your vacation. That's a software experience.

Steve Ballmer said that, in 2006, about a feature of the Zune. Using ad hoc Wi-Fi, two Zunes could send files to each other. It's a little odd that Ballmer doesn't mention using this to send music, but keep in mind that Microsoft had to tread very carefully to avoid the attention of the RIAA. The point is that it was a Wi-Fi version of IrDA's OBEX file sharing, and Microsoft was widely mocked for calling it "squirting." The funny thing is that I think this is a little overplayed: the Zune didn't actually call it "squirting" in the UI, and I don't think that term was even used in documentation. But Steve Ballmer used it, as did other Microsoft employees, so it seems to have been the term they used internally.

The other funny thing is that they didn't come up with it.

Squirting was already the accepted term for this feature by the time the Zune came around. It's generally assumed that one-shot file transfers came to be called "squirts" because "squirt" is used in a similar way for brief radio signals (back to at least WWII). I can't promise you that I have found when this happened, but I have a good contender: HP's CoolTown research project, which ran from 1999 to 2000 or so and developed an IrDA protocol they called "e-Squirt." By 2000, papers about IrDA referred to file sharing as squirting. I have no doubt that the Zune team would have picked up the term from there, given Microsoft's extensive involvement in IrDA.




Well, I didn't expect the tangent about squirting, but we had to do it. Back to our main program: whatever happened to IrDA?

It's frustrating, because IrDA had a lot of potential. The work on GigaIR, even though unrealized, showed that IrDA doesn't have to be slow. IBM invested a lot of time and verbiage in IrDA AIR, or Advanced Infrared, which circled back on the whole "point to multi-point is possible but unimplemented" thing. AIR was an overhaul of the whole IrDA stack that made networks of three or more devices possible although, as it turned out, not necessarily practical. AIR was supported by a lot of IBM products but never really used for anything, because by that point devices were getting Bluetooth and Wi-Fi.

That's about it: IrDA just got wiped out by Bluetooth and Wi-Fi. Wi-Fi meant that "thick" mobile devices like PDAs would just connect to a network, wiping out the whole synchronization scenario. For more transient purposes like file sharing, Bluetooth promised more convenience, even if I'm not sure it delivered it. IrDA did require that the two devices be in direct sight of each other, which imposed design constraints on mobile devices and implied that you would hold them pointed at each other the whole time.

By the mid-2000s, things just got worse. Integration of Bluetooth and Wi-Fi modules meant that Bluetooth was "free" in a lot of mobile devices, making it cheaper even than the IrDA components. And while IrDA coped well with most indoor lighting, it did not cope with direct sunlight, and smartphones probably made that scenario pretty frequent.

IrDA published IrSimple, a comprehensive set of improvements to IrDA, in 2005. It was mostly a Japanese effort because, Japan being the way it is, IrDA was more widely used there. Infrared was already disappearing from mobile devices when the iPhone killed IrDA entirely. Without mobile devices, IrDA was a solution without a problem. The technology, the standard, and the Infrared Data Association itself all faded into obscurity.

This isn't to say that IrDA is gone. It's probably actually pretty widely used, as far as short-range wireless protocols go. There are several common embedded applications of IrDA and IrDA-derived protocols, ranging from power meters to laundry machine diagnostics. These applications benefit from IrDA's low cost: modern documents on IrDA often call out that it can be implemented with bit-banged GPIO or unused UARTs and very few additional components. They also value that IrDA is easily compatible with waterproofing and that it doesn't provoke the regulatory requirements associated with RF.

There's another upside to IrDA, as well: security. IrDA isn't quite visible light, but infrared behaves similarly: most materials are opaque to it. If you can seal a room against light, you can seal a room against IrDA—and that's a lot easier than reliably blocking RF. There are probably some enduring applications of IrDA because it's permitted in some areas where RF communications are not, due to concerns about eavesdropping or malicious interference.

Okay, you've made it this far, and I did not expect this article to be this long. Let's have a little dessert. As we've seen with Li-Fi, there's interest in FSO communications as a way to connect portable devices to IP networks. IrDA sure has a lot of the ingredients; it just wasn't built for IP.

Well, don't worry, there was a solution to that: IrLAN. IrLAN implemented IP over IrDA with a surprisingly Wi-Fi-like architecture, including support for multiple clients. That part is particularly interesting: IrLAN supports an "access point" mode with one AP communicating with multiple clients. You could, in theory, use it as a direct alternative to Wi-Fi. It seems like HP even built a device for this, the HP NetBeamer, although it is so obscure that I suspect that it only ever existed as a prototype.

But, remember, point-to-multi-point was unimplemented in IrDA. Well, except for AIR, which saw little adoption. How to square the circle?

It's strange, I'm honestly a little confused by it, but the original IrLAN specification has this odd sentence buried in a definition in the glossary:

IrLMP is multi-point-capable even though IrLAP is not. When IrLAP becomes multipoint-capable, multiple machines will be able to communicate concurrently over an infrared link.

Well, it's true that nothing about IrLMP really prevented point-to-multipoint. But just saying that IrLAP didn't support it is rather understating the problem: AIR had to make changes to the physical layer to get feasible multipoint support (and AIR had a bad reputation for performance as a result). Later in the specification, we read that "it is quite reasonable to expect future implementation of access point devices to support multiple concurrent clients connecting to the LAN."

So, it turns out, the authors of the IrLAN spec defined support for multiple clients on the assumption that it would become possible, through some effort like IBM's AIR... and then that just didn't happen.

Honestly, IrLAN doesn't seem to have gone anywhere. Even at the time, many reportedly thought it was a bad idea. Much better to just use PPP, designed for a serial channel with the exact behavior provided by IrCOMM. It's nice to know that at least the protocol side of an IrDA-based alternate Li-Fi was developed. I hope to one day find an HP NetBeamer. I want to pick up some IrDA devices and experiment with OBEX. We could send some files around at 4 Mbps. With a quiet IR background, later-generation IrDA transceivers apparently worked over impressive ranges. AIR targeted five to ten meters. We could build a LAN. We could squirt. That's a software experience.