The Vacuum Transistor: A Device Made of Nothing
spectrum.ieee.org

I'm surprised the IEEE editors allowed the word "Introducing" in the title. Vacuum FETs are often a target of the diamond thin-film community. (Diamond has a very interesting electronegativity: very little energy is required to get an electron out of diamond and into the "vacuum" (usually air).)
The problem with carbon in general is that it has a band gap 5x larger than silicon's. Which overall (but not entirely) means that if you want to build a transistor with diamond (or graphene) you need to put 5x as much voltage into it.
The reason graphene is great for transistors is that with its higher band energy it's 5x harder for it to soft-set itself. So if we pretend today's 11nm transistors have a 1% chance of electron tunneling, carbon would have 0.2%.
The problem is that same switch would take 5x as much power to switch. Which means a modern 220 W CPU would now need 1,100 watts of power :x
You started by saying 5x voltage, switched to 5x power, and ended with 5x energy.
Those are not the same thing. You can have 5x voltage without changing the power or the energy.
You can have 5x the voltage without changing the power consumption if you reduce the capacitance. If you keep the transistors at the same size, a computer working with 5x more voltage will use 5x more power if operating at the same frequency. (But you can probably make it run faster...)
That said, except for very few embedded applications, computers today operate on a voltage that's much bigger than the silicon band gap. I'm not convinced it'd be a problem.
Actually, all else being constant, 5x voltage yields 25x power:
P = CV^2 * f (multiply by activity factor if pedantic; assuming all transistors toggle every cycle here.)
And, of course, you are right.
The silicon band gap is ~1.1 eV at room temperature (i.e., about 1.1 V per electron). Mainstream modern processor core voltages are around 1.2V (maybe less for the very latest parts) and "ultra low voltage" processors operate as low as 0.7V.
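The quadratic dependence on voltage in P = CV^2 * f is easy to sanity-check with a few lines (the capacitance and frequency below are illustrative, not real chip figures):

```python
# Quick sanity check of P = C * V^2 * f (activity factor taken as 1).
# The capacitance and frequency here are illustrative, not real chip figures.

def dynamic_power(c, v, f):
    """Dynamic switching power of a capacitive load toggling at frequency f."""
    return c * v**2 * f

base = dynamic_power(c=1e-9, v=1.0, f=3e9)    # 3 W at 1.0 V
scaled = dynamic_power(c=1e-9, v=5.0, f=3e9)  # 75 W at 5.0 V

print(scaled / base)  # ~25: power scales with the *square* of voltage
```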
>The problem is with carbon in general is it has a band gap
The band gap is not a function of the element alone, but mainly of the crystal structure. Different allotropes of carbon have vastly different band structures.
Diamond has a high band gap, as you mentioned. But Graphene has none, and that is one of the major obstacles of the material.
The problem with having none is that it makes your transistors even more vulnerable to quantum effects.
That's not the main problem. The main problem of not having a bandgap is that you don't have a transistor...
Sshhh.. It's not allowed to talk in that way about the graphene "transistor". You may upset certain funding sources.
But you know, it is really fast! Just almost useless transconductance and no off state, but we can fix it later!
But bilayer graphene has an induced bandgap!! What? The mobility goes to shit as soon as you touch it with a substrate? Well that's not a problem, we'll design vacuum suspended GFETs, but I'll get right back to you when I figure out how to put an oxide around that sucker
> The problem is with carbon in general is it has a band gap 5x larger then Silicon. Which overall (but not entirely) means if you want to build a transistor with Diamond (or graphene) you need to put 5x as much voltage into it.
I should have said electron affinity rather than electronegativity: You're not trying to get electrons from the valence band into the vacuum, just from the conduction band to the vacuum. Diamond actually has a negative electron affinity, which is why the vacuum electronic folks were excited about it.
Even if the technology is old, the article itself is very much an introductory one. That may be what they meant.
I don't understand - why not "introducing" - it seemed to be an introductory level article on the topic of "vacuum transistors"?
I guess that was the correct interpretation and why the IEEE editors allowed it. They did mention that they did not introduce the architecture (which is what it sounded like originally) about midway through the article.
Ok, we'll at least take it out of the title here.
I'm no scientographer, but it seems to me that a dependence on helium may be a stumbling point for getting cheap vacuum transistors to the market. The ongoing helium shortage[1] is driving the price of helium up which could make helium-based vacuum transistors expensive, limiting widespread adoption. It could be that high-speed helium vacuum transistors become a speciality product for those projects that feel the need justifies the additional cost.
[1] http://www.decodedscience.com/helium-shortage-situation-upda...
Intuitively, this makes sense at human scales.
But microprocessors are tiny. The i7 4770, which is in my machine right now, has a die size of 177mm^2. The die has a thickness of 775um, for a volume of ~137mm^3.
Even if you made the whole processor out of helium, you could make 100,000 of them out of a single 14 liter party balloon.
The Zeppelin NT, which holds 8225m^3 of helium, contains enough for 60 billion high end processors.
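The arithmetic above can be reproduced in a few lines (all figures taken from the comment, and still assuming the whole die is helium):

```python
# Back-of-the-envelope check of the volumes above; all figures from the comment.
die_area_mm2 = 177.0       # i7 4770 die area
die_thickness_mm = 0.775   # 775 um
die_volume_mm3 = die_area_mm2 * die_thickness_mm   # ~137 mm^3

balloon_l = 14.0           # party balloon, litres
zeppelin_m3 = 8225.0       # Zeppelin NT helium capacity

chips_per_balloon = (balloon_l * 1e6) / die_volume_mm3      # 1 L = 1e6 mm^3
chips_per_zeppelin = (zeppelin_m3 * 1e9) / die_volume_mm3   # 1 m^3 = 1e9 mm^3

print(round(chips_per_balloon))   # ~102,000
print(chips_per_zeppelin)         # ~6e10, i.e. 60 billion
```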
Well, let's see, a modern chip's die size is on the order of 300mm^2. Let's be generous (I think) and call it 5mm thick. A single helium canister can contain around 300 cubic feet of helium [1]. Those are nasty units to work with by hand, but https://www.google.com/search?q=%28300+cubic+feet%29+%2F+%28... suggests one such cylinder would be enough for 5.5 million new chips, assuming the entire chip was just helium. You can probably safely add at least two more orders of magnitude for the fact that the chip will still mostly be silicon (or something); my gut suggests 3 or 4 is probably even closer.
Oh, sure, there will be losses and such, but this is still a trivial expense next to the billions of dollars of fab work that will be required. In these quantities we literally use gold with hardly a second thought for price.
[1]: http://www.praxairdirect.com/Product2_10152_10051_14626_-1_1...
300mm^2 * 5mm = 1.5 milliliters
24k gold is $42/gram; 1 gram is 0.052 milliliters.
So, 42 / 0.052 * 1.5 = $1,211.
Note: The important parts are not 5mm thick etc, but gold is rather expensive by volume.
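Redoing the gold-by-volume arithmetic above in script form (price and density figures are the ones quoted in the comment, not current market values):

```python
# Gold-by-volume estimate; price and density are as quoted in the comment.
volume_ml = 300 * 5 / 1000   # 300 mm^2 * 5 mm = 1500 mm^3 = 1.5 mL
usd_per_gram = 42.0          # quoted 24k gold price
ml_per_gram = 0.052          # volume of one gram of gold

cost = volume_ml / ml_per_gram * usd_per_gram
print(round(cost))  # ~1212 USD for a chip-sized lump of gold
```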
Sorry, by "these quantities" I meant "at computer-chip quantities" in general; gold we pretty much use by area, and not much of that, either. The numbers I gave were purposely quite generous for volume. After all, I did say they were probably 3 or 4 orders of magnitude too generous. Per reitzensteinm's post, looks like you can recover another 1.5 or so out of my chip-height estimate, too.
If I understand the basic premise, then other mostly stable and abundant gases (like Nitrogen, or even air) could probably be used as well, if the transistor size and voltage are small enough. (That's wholly conjecture, though. I also am not a sciencer).
> That is, you don’t, in fact, need to maintain any sort of vacuum at all for what is nominally a miniaturized piece of “vacuum” electronics!
According to the article, if the device is small enough it doesn't require helium or a vacuum.
"And we’ll have to devise proper packaging methods for these 1-atmosphere, helium-filled devices."
Actually it does require helium, but no vacuum.
"For example, the mean free path of electrons in air under normal atmospheric pressure is about 200 nanometers, which on the scale of today’s transistors is pretty large."
It doesn't require helium; helium is just a bit better.
And vacuum, at least comparatively soft vacuum, isn't a big deal either. I've got functional vacuum tubes around here older than I am.
Regarding the opening anecdote - some have suggested the Soviets used vacuum tubes so that their planes would survive the electromagnetic pulse from a nuclear explosion.
> Regarding the opening anecdote - some have suggested the Soviets used vacuum tubes so that their planes would survive the electromagnetic pulse from a nuclear explosion.
I don't think the authors delved too hard into the history. I interviewed at General Dynamics in Ft Worth around 1986: They told us that they had just designed out the last of the tubes on the F-16 and were working to remove the last tubes from the F-4 Phantom.
Nobody mentioned that the tubes were around as a countermeasure to EMP; it was more about waiting for technologies to mature to the point that you'd believe they were battle-tested enough for your most advanced weaponry.
Edit: The above anecdote in reference to the article claiming
> By the mid-1970s, the only vacuum tubes you could find in Western electronics were hidden away in certain kinds of specialized equipment
DISCLAIMER: This is what I remember once hearing in an ECE class. I was hesitant to post this comment given that it might be garbage, but perhaps someone can help confirm or disprove this.
I once heard that one of the reasons the Soviets continued to use analog systems was that they were "faster, more compact, and more power efficient" [than a digital computer]. This came at the cost of flexibility. For example, an op-amp allegedly can do integration faster and with less power than a digital computer, but the IC can't be reprogrammed. Digital computers have huge benefits, but ones that come at a cost.
Thoughts/input anyone? As I said, I might be completely off base, so please nobody take that as anything more than "food for thought".
At 1980s levels of integration, then I'd agree that for many sorts of signal processing it's easier to do it in analog than digital. Especially if all your engineers are trained for analog. In 2014 the situation is the other way round.
Robustness of power electronics is another consideration: tubes are mechanically fragile but not vulnerable to ESD, whereas FETs are, especially during assembly. If their factory process control was poor it would have been easier to stick with the tubes.
I think the #1 reason was military-logistical: Easy to manufacture lots of them from domestic factories, stockpile at airfields, and replace by hand. Some sources suggest resistance to temperature-fluctuations and EMP were also factors.
Which the end of the article mentioned.
They mentioned it at the end of the article, which was disappointing since I'm sure many readers didn't get that far.
Depending on the intensity, an EMP can fry tubes quite well. And when weaker, bipolar transistors (the kind most people were using by then) have no problem surviving them.
There's a middle ground where an EMP would be strong enough to fry a bipolar transistor, but too weak to fry a tube. I don't know how significant that is for military strategy, but the automatic answer of "transistors can't handle EMP" isn't completely right.
How does one speed-test a device that switches significantly faster than the available electronics?
It is not even the fastest. This article did well to leave out actually fast transistors, only comparing to graphene transistors which most people don't really consider fast in the first place.
Check out Indium Phosphide HEMTs (high electron mobility transistors): http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4419013
Mark Rodwell's group in UCSB (http://www.ece.ucsb.edu/Faculty/rodwell/rodwell_info/rodwell...) has been working on these transistors for a while. I think they're pushing 2 THz currently.
Drive a periodic waveform and sample repeatedly. Assuming you don't somehow sample the same point in the periodic waveform each time, you'll eventually get the complete waveform.
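A minimal sketch of that random equivalent-time sampling idea, assuming a perfectly periodic signal and precisely-known sample times (both idealizations):

```python
import math
import random

# Fold randomly-timed samples of a periodic signal back into a single period.
# Idealized: the signal is perfectly periodic and sample times are known exactly.
PERIOD = 1.0
signal = lambda t: math.sin(2 * math.pi * t / PERIOD)

random.seed(0)
samples = []
for _ in range(2000):
    t = random.uniform(0, 1000 * PERIOD)     # slow, effectively random instants
    samples.append((t % PERIOD, signal(t)))  # fold each instant into one period

samples.sort()  # ordered by folded phase: one reconstructed period
worst = max(abs(v - math.sin(2 * math.pi * p)) for p, v in samples)
print(worst)  # tiny -- the folded samples lie on the original waveform
```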
> Assuming you don't somehow sample the same point in the periodic waveform each time, you'll eventually get the complete waveform.
To add an example of a real technique: You can use a short laser pulse and change the time-of-flight (mirrored path on a stepper motor, for instance). This technique will get you to the terahertz region, which is pretty much state of the art for where electronic devices still have gain.
http://en.wikipedia.org/wiki/Terahertz_time-domain_spectrosc...
Oh that is pretty cool. I wonder if they have considered them as power switching devices. Something like that which had an effective Rds of nano-ohms could make electric cars more efficient.
> *I wonder if they have considered them as power switching devices.*
It's been a couple of decades since I last looked: The diamond thin-film guys thought that power would be a sensible application. There is a technological hurdle: If you notice in the diagram, the electrodes are pointed. This increases the field at the tip, in order to overcome the electric-potential barrier to emission of the electrons. Reliability issues arise because the emission is occurring over a smaller surface area.
Like I said, it's been a while since I last looked, but vacuum devices have had appeal for power apps for a long time.
Yeah, I'm skeptical about plans to build whole integrated circuits out of them, but these high frequency, low-loss devices could be killer for a lot of power electronics applications: motor controllers, DC-DCs, etc.
Not sure about high power, but the physical requirement of a small size scale and the engineering goal of high current are usually contradictory.
I do absolutely think this could be awesome for low-power, ultra-tiny DC-DC converters though. For instance:
http://hexus.net/tech/news/psu/64161-finsix-laptop-power-sup...
is pushing to high enough frequencies so as to not need an inductor at all. Problem with high frequencies is that usually the efficiency drops, so there's a tradeoff. But if you can avoid the switching losses by moving to a transistor with higher operating frequencies, it might be quite good =)
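To see why pushing the switching frequency up shrinks (and eventually eliminates) the magnetics, here's the standard ideal-buck ripple formula with made-up numbers (none of these are FINsix's actual figures):

```python
# For an ideal buck converter, the inductance needed to hold the ripple
# current to dI is L = Vout * (1 - Vout/Vin) / (f_sw * dI).
# All numbers below are illustrative, not from any real design.

def buck_inductance(v_in, v_out, f_sw, ripple_amps):
    """Inductance needed to keep ripple current at ripple_amps."""
    duty = v_out / v_in
    return v_out * (1 - duty) / (f_sw * ripple_amps)

for f_sw in (500e3, 50e6, 5e9):  # kHz-class, VHF, and beyond
    L = buck_inductance(v_in=20.0, v_out=5.0, f_sw=f_sw, ripple_amps=0.5)
    print(f"{f_sw:.0e} Hz -> {L * 1e9:.4g} nH")
```

The required inductance falls linearly with frequency: 15 uH at 500 kHz drops to ~1.5 nH at 5 GHz, which is parasitic-trace territory, so no discrete inductor at all.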
Or the tiny size might work well for letting you do cool stuff like on-chip DC-DC conversion where you don't need an inductor because it's all so fast...
Would a guitar amp built with these sound like a tube amp?
> Would a guitar amp built with these sound like a tube amp?
It would be nice, right? Most folks worry about getting vacuum electronics to simply work, so I couldn't dig up anything on the nonlinearities of a vacuum transistor amplifier. (For those who don't follow such things: The nonlinearities of guitar amps are intimately related to how it sounds. Many musicians still use tube amps because the world has become accustomed to that sound. It is the gold standard of distorted amplifiers to some of us.)
That said, I'm not optimistic. A quick check of a Fender Twin Reverb schematic (http://support.fender.com/schematics/guitar_amplifiers/65_Tw...) shows that the final amp has pentodes, different from the triode that the OP's article is talking about, and those pentodes have separate heaters for the cathode. So the electrons coming off the cathode are going to be much hotter. (Another name for the monolithic vacuum devices used to be "cold cathode", because they acted like thermionic emitters, but without a heater.)
There's a lot that's different. Of course, the only way to know for sure is to plug it in and crank it up to 11.
I'd bet computers will be emulating the sound of vacuum tubes way before any of those things get into market.
You can get the correct vacuum tube-like amplification from a computer today; it's just that tubes are still cheaper. A/D converters and first-stage linear amplifiers are only getting cheaper (even in this post-Moore's-law era), so it's only a matter of time before computers retire valves in yet another application.
EDIT: Also, tunnel devices have a completely different behavior from macroscopic valves. They are very non-linear, which makes them great for digital applications but horrible as sound amplifiers.
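As a toy illustration of the "emulating tube sound" idea: the core of most digital models is a smooth, slightly asymmetric transfer curve. A tanh-with-bias waveshaper is the textbook stand-in (real amp models are far more elaborate):

```python
import math

def soft_clip(x, drive=3.0, bias=0.2):
    """Smooth asymmetric waveshaper; the asymmetry adds even harmonics,
    loosely like a single-ended triode stage. Parameters are arbitrary."""
    return math.tanh(drive * x + bias) - math.tanh(bias)

# A full-scale sine gets its peaks rounded off rather than hard-clipped.
wave = [math.sin(2 * math.pi * n / 64) for n in range(64)]
shaped = [soft_clip(s) for s in wave]
print(max(shaped) < max(wave))  # True: positive peaks are compressed
```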
> You can get the correct vacuum tube-like amplification from a computer today, it's just that tubes are still cheaper.
It is absolutely not the case that tube amplifiers are still sold because they're cheaper than solid state amplifiers + DSP. Their sound is preferred and they are much, much more expensive to produce.
The valves (which don't enjoy the economies of scale they once did) are only part of the cost. The power supply of a tube amp is usually more complex than a solid state amp and they generally need output transformers because of high output impedance -- transformers that are linear through the audible band are expensive to produce.
At scale you could easily build a SOTA DSP card suitable for emulating "tube sound" for less than the cost of a single channel output transformer. Line 6, among others, have built businesses based on that fact.
They already do. Check out Line6.com
Won't prevent audiophiles from being audiophiles. I remember reading a screed against double-blind testing in some magazine (I'd link to it, but can't find it).
Audiophiles are funny.
I had a huge problem making a D/A converter with enough precision for instrumentation a while back, but designing some digital I/O with enough bandwidth for it was a nightmare. Then I looked at the Internet and saw an audiophile complaining that a soundcard[1] with 20dB less noise than my design was crap.
1 - A PCI express card, of course. Didn't try that bus. I'd have a really bad time manually creating a board for it.
I don't think these are intended for power-amplification purposes.
I think it gets digitized at some point, so it would probably lose the analog sound of a tube amp.