Some of the error messages produced by Apple's MPW C compiler (2006) (cs.cmu.edu)

I miss this kind of playfulness in computing.
When I was at Amazon my manager told me that several years earlier he was responsible for updating the 404 page so he scanned a picture of a cat his daughter drew and made that the body of the page. In 2009 when I started, that was still the image, but at some point someone must have noticed and replaced it with a stock photo of a dog. The asset was still called kayli-kitty.jpg, though. It’s since been changed again to rotating pictures and references to the original are gone.
This is really cool! The filename on certain Amazon 404 pages (eg. https://www.amazon.co.jp/404) is still kailey-kitty.gif (but the image has been replaced with a standard icon).
I also found this comment from him on a blog: https://www.davebellous.com/2006/09/25/what-the/#comment-290...
> The compiler is 324k in size
Playfulness isn't the only thing we've lost. Software bloat has reached comedic levels.
Your optimizing compiler today will actually optimize. LLVM was recently ported to the 6502 (yes, really) [1]. An example:
    void outchar (char c) {
        c = c | 0x80;
        asm volatile ("jsr $fbfd\n" : : "a" (c) : "a");
    }

    void outstr (char* str) {
        while (*str != 0)
            outchar(*str++);
    }

    void main () {
        outstr("Hello, world!\n");
    }

That is compiled to this:

    lda #$c8    ; ASCII H | 0x80
    jsr $fbfd
    lda #$e5    ; ASCII e | 0x80
    jsr $fbfd
    ...

Unrolled loop, over a function applied to a constant string at compile time. An assembler programmer couldn't do better. It is the fastest way to output that string so long as you rely on the ROM routine at $fbfd. (Apple II, for the curious.) Such an optimizing transform is unremarkable today. But stuff like that was cutting edge in the 90s.

I understand your point, but LLVM-MOS is a bad example. You gain LLVM’s language optimizations, as you point out. But LLVM’s assumed architecture is so different from the 6502 that lowering the code to assembly introduces many superfluous instructions. (As an example, the 6502 has one general-purpose register, but LLVM works best with many registers. So LLVM-MOS creates 16 virtual registers in the first page of memory and then generates instructions to move them into the main register as they are used.) It’s of course possible to further optimize this, but the LLVM-MOS project isn’t that mature yet. So assembly programmers can still very much do better.
> So LLVM-MOS creates 16 virtual registers in the first page of memory and then generates instructions to move them into the main register as they are used.
Isn’t this actually good practice on the 6502? The processor treats the first page of memory (called the zero page) differently. Instructions that address the zero page are shorter because they leave out the most significant byte. Addressing any other page requires that extra byte for the MSB.
Furthermore, instructions which accept a zero page address typically complete one cycle faster than absolute addressed instructions, and typically only one cycle slower than immediate addressed instructions.
So if you can keep as much of your memory accesses within the zero page as possible, your code will run a lot faster. It would seem to me that treating the zero page as a table of virtual registers is a great way to do that because you can bring all your register colouring machinery to bear on the problem.
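For concreteness, a minimal sketch of the difference (the addresses are arbitrary; byte and cycle counts are the standard figures for LDA):

    lda #$42     ; immediate:  2 bytes, 2 cycles
    lda $42      ; zero page:  2 bytes, 3 cycles
    lda $0242    ; absolute:   3 bytes, 4 cycles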
I understand your point but the beginning of the zero page is almost always used as virtual registers by regular hand-rolled 6502 applications. So it's pretty normal for LLVM to do the same, it's not an example of LLVM doing something weird.
Not really that much of a wonder, other than that the 6502 sucks for C.
"An overview of the PL.8 compiler", circa 1976
Does it end that code with a jmp $fbfd?

Doubtful, given the JSR comes from an inline asm. You’d need to code the call in C (with an appropriate calling convention, which I don’t know if this port defines) for Clang to be able to optimize a tail-position call into a jump, which it is capable of doing, generally speaking.
No. The compiler knows that trick for its own code :) not sure about introspecting into the assembly (I think LLVM doesn't do that). But either way, standard C returns an int of 0 from main on success. So:

    ldx #0
    txa
    rts
Nitpick: this isn’t standard C (it uses void main, not int main)
Nitpick 2: why ldx #0 txa rts? I would think lda #0 rts is shorter and faster
Back to my question: if it can’t, the claim “an assembler programmer couldn't do better” isn’t correct.
I think an assembler programmer for the 6502 would consider doing a jmp at the end, even if it makes the function return an incorrect, possibly even unpredictable value. If that value isn’t used, why spend time setting it?
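A minimal sketch of that trick on the string's final character, assuming we don't care what value main() returns:

    lda #$8a     ; ASCII '\n' | 0x80, the last character of the string
    jmp $fbfd    ; tail call: the ROM routine's own RTS returns straight to our caller

That drops our RTS entirely, saving a byte and the JSR/RTS round trip.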
An assembly programmer also would:
- check whether the routine at 0xFBFD happens to guarantee setting A to zero, or returning with the X or Y register set to zero, and shamelessly exploit that.
- check whether the code at 0xFBFD preserves the value of the accumulator (unlikely, I would guess, but if it does, the two consecutive ‘l’s need only one LDA#)
- consider replacing the code to output the space inside “hello world” by a call to FBF4 (move cursor right). That has the same effect if there already is a space there when the code is called.
- call 0xFBF0 to output a printable character, not 0xFBFD (reading https://6502disassembly.com/a2-rom/APPLE2.ROM.html, I notice that is faster for letters and punctuation)
On a 6502, that’s how you get your code fit into memory and make it faster. To write good code for a 6502, you can’t be concerned about calling conventions or having individually testable functions.
I bet that sizeof(int)==2 - which immediately tells you everything you need to know - and the return value from a function has 8 bits in X and 8 bits in A. So ldx#0:txa is how you load a return value of (int)0.
Regarding this specific unrolled loop, I would expect a 6502 programmer would just write the obvious loop, because they're clearly optimizing for space rather than speed when calling the ROM routine. They'll be content with the string printing taking about as long as it takes, which clearly isn't too long, as they wouldn't have done it that way otherwise. And the loop "overhead" won't be meaningful. (Looks like it'll be something like 7 cycles per character? I'm not familiar with the Apple II. Looks like $fbfd preserves X though.)
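For contrast, a minimal sketch of that obvious loop (the outstr/loop/msg labels are assumed, with msg a NUL-terminated, high-bit ASCII copy of the string; per the above, $fbfd preserves X):

    outstr:
        ldx #0
    loop:
        lda msg,x     ; fetch the next high-bit ASCII character
        beq done      ; a zero byte terminates the string
        jsr $fbfd     ; ROM output routine; preserves X
        inx
        bne loop      ; always taken for strings under 256 bytes
    done:
        rts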
We did a lot of loop unrolling and self modifying code back in the day, when making demos for the C64. The branch is really expensive. For example, clearing the screen you might use 16 STA adr,x and then add 16 to X before you branch to the loop.
Indeed, in some cases you want the unrolls. The 6502 is good in the twisties, but if you're trying to do any kind of copy or fill then the percentage of meaningful cycles is disappointingly low, and the unroll may be necessary. Also, if you're trying to keep in sync with some other piece of hardware, then just doing it step at a time can be much easier.
I have done a lot of all of this sort of code and I am quite familiar with the 6502 tradeoffs. But for printing 15 chars by calling a ROM routine, I stand by my comments.
Yes, I compiled with -O3 for maximum speed. That would be an unusual flag choice in most cases.
I just wanted to use 6502 code (so many seem to be able to read it!) with C side by side. x86 would have worked as well. Where the fastest answer would also be the same construct, assuming the dependency on an external routine.
> Nitpick: this isn’t standard C (it uses void main, not int main)
You know what, I'm gonna nitpick that nitpick: void main() is fully allowed on a freestanding target, which is still standard C.
Given the C standard's historically generous interpretation of undefined behaviour and other miscellany, I think it's a reasonable reading of the standard to treat a target that allows something other than int main(...) as freestanding rather than hosted, and therefore fully conforming.
Yep, llvm-mos-sdk is explicitly freestanding; the libc functions in the SDK follow the hosted C standard, but they don't add up to a hosted implementation. The only known C99 non-compliance is the lack of floating point support, which is actively being worked on.
Well... I mean... if you want inverse video.
There’s nothing stopping anyone from going back and using all of that old software exclusively.
For some reason everyone prefers the newer software, though. Perhaps there’s more to it than binary size?
Binary size on a desktop OS is almost totally irrelevant in practice. Memory size matters a little more, but your OS will generally do a good job of loading what it needs (ie, huge binaries can still start quickly) and paging out what it doesn't.
People have aesthetic complaints about "bloat", but again this is orthogonal to the actual speed of anything.
Well, the bloat has made many programs slower than they could be. Software is eating up the advances we get in hardware. Modern Word 365 is not any faster than Word 95 on a Pentium 66 in normal use. That is a 66MHz computer with maybe 16MB RAM, and a rotating hard drive.
Bloat making software bigger will in many cases also make it slower.
Also, the UX on Windows 95 was consistent and easy to learn. Now, much software fails at basics like disabling a button once you have clicked it and the computer is working.
MacOS is on a steady curve to the bottom. It is not alone.
The software bloat and decreasing quality is a serious issue.
I see this sentiment again and again, but the Windows 95 experience I remember included frequent spinning hourglasses, blue screens of death from faulty drivers, everything grinding to a halt when memory is near exhaustion or files are being copied between disks, tons of third party applications (and even Microsoft applications, like Office) that disregarded the Windows UI standards, constantly having to run CHKDSK and defrag, not to mention malware/virus vulnerability...
Latency when the system is under low load was definitely better, although a big contributor to that is changes in input and display hardware. But otherwise I'd much rather have today's "bloated" experience over the real world of the '90s.
Thing is, our computers are orders of magnitude faster than 20-30 years ago. Why isn't our software orders of magnitude faster? I'll settle for just 10 times faster.
If you take software from 20 years ago, and run it on modern hardware, it will be instant in most operations.
Clock speeds certainly aren't orders of magnitude faster than they were 20 years ago. The vast majority of the performance improvements of the past two decades have gone into improved capability, stability, and security. I don't consider streaming 4K video to be "bloat" personally.
Let's not forget the reinstalls to have the system perform properly again. I must have installed Win94 dozens of times over the years. I've never fresh-installed OS X once. I frankly don't even know how to do that with the current version.
We need Stevesie to reincarnate and fire whoever is Scott-Forstall-ing it up this year.
I still think iOS was more fun in the Forstall-skeuomorphic era, and screen elements were easier to differentiate.
macOS icons also used to have distinct silhouettes which made them easier to distinguish, but now everything is a square tile. Screen controls which used to be visible and targetable are now hidden and are harder to hit when they do appear.
It feels like we are still under the tyranny of the (Jony Ivian?) streamlined aesthetic over usability and functionality, as if a library decided to organize books by size and color.
"What hardware giveth, software taketh away."
In the case of Apple, it's often Apple software eating up the benefits of Apple hardware.
Unfortunately subtle differences (such as improved reliability/security or a streamlined workflow) are lost in the computing market, where people are attracted to the new and shiny rather than the old and usable. Also designers like to mess with things.
Word 95 was utter bloatware compared to 2.0c though
> Memory size matters a little more, but your OS will generally do a good job of loading what it needs (ie, huge binaries can still start quickly) and paging out what it doesn't.
And yet Electron apps often garnish that memory bloat and computational inefficiency with sluggish performance and a clunky user experience.
It's a shame when an 8GB Mac mini doesn't have enough RAM to run apps comfortably. Of course there's a bit of a corrupt bargain going on between bloated software and Apple since the latter wants to upsell you to a more expensive model.
Binary size, yes, since that's just sequentially reading bytes from an SSD. What sucks about modern software is input latency and overall responsiveness.
I would love to, if things actually worked on it! Since everything is HTTPS now and you need TLS1.3 for many things, running very retro things for daily usage is next to impossible.
> very retro things
I wouldn't consider an iPad Mini 1st generation to be very retro, but I still need to run a MITM proxy for it to be able to browse Wikipedia(!).
> There’s nothing stopping anyone from going back and using all of that old software exclusively.
Monthly bills are stopping me. Can I use Apple's MPW C compiler to build for iOS?
I don't know what boxes need to be ticked, but tcc is around 200kb and supports ARM.
> There’s nothing stopping anyone from going back and using all of that old software exclusively.
You need to search for it on old ftp sites.
> For some reason everyone prefers the newer software, though. Perhaps there’s more to it than binary size?
The compilers have "evolved". Compiling old code is challenging, to say the least. I do compile old programs when the new seem to explode: xpdf, xsnow.
Compiling old compilers is impossible because they rely on ancient kernel headers.
The clang executable on my machine is 18kb.
On my system, where clang is statically linked, the binary is 48mb.
As in megabytes.
Nearly all of clang and LLVM are linked as libraries.
I bet it forks some other executables, though...
> picture of a cat
I couldn't find this elusive picture of a cat on archive.org, but I found this dog instead:
https://web.archive.org/web/20030113144310/https://www.amazo...
June 2016 appears to be when Amazon adopted the current error pages with the large dog images.
https://web.archive.org/web/20160612232820/http://www.amazon...
Maybe this is it?
http://telcontar.net/Screenshots/worldwidewonk/Amazon-404-eh...
Wow, now that is a screenshot full of nostalgia... Nice find.
Thanks to both of you for this heartwarming bit. How do you go about finding something like that?
Quite manually, actually. I ran an image search on DDG for "Amazon 404 cat" and looked through the results for a kid's drawing.
I really like the look of that browser, which is it? Netscape?
That’s the one!
I misremembered the dog; it looks like they first replaced it with a circled question mark icon. A later snapshot has the image name[0] but not the image[1].
mastry found a screenshot of the image in his sibling reply.
0 - http://g-ecx.images-amazon.com/images/G/01/x-locale/common/k...
1 - https://web.archive.org/web/20071030172825/http://www.amazon...
Go load www.amazon.com and check the source right after </html> :)
That's wonderful to see. Also interesting to see the site performance comments around different components.
Another anecdote: somewhere in that time period we (Prime) were using comments for various metadata and pre-release and post-deployment checks would verify the comments were in place. The team responsible for the edge proxies decided they were going to rewrite all the HTML going out on the fly to remove extraneous comments and whitespace in an effort to reduce page-size.
In the middle of testing a release all of the tests related to a particular feature started failing and (I believe) different devs were getting different HTML on their systems (the feature wasn't rolled out to every session). Our QA team was extremely pedantic in the best way possible and wouldn't allow the release to continue until testing could complete, so we had to track down the responsible parties and get them to dial down their transformation. They eventually tried again without stripping comments, but I can't imagine much was saved after compression without much of anything removed (they might have been rewriting other tags as well).
Maybe the daughter sued for copyright infringement when she turned 18. ;)
Serious question: Is this possible when a guardian gave consent earlier?
I know you’re joking, but she wasn’t 18 yet before they changed the picture.
AFAICT, kids own the copyright to things they create[0], but guardians are responsible and can use it in the child’s interest. IANAL, consult an attorney, etc., etc.
0 - https://www.copyright.gov/help/faq/faq-who.html#:~:text=Can%....
Where does this playfulness persist? I miss it too.
On the other hand, I despise the mock playfulness exhibited by, eg, Slack. It’s such a frustrating environment for me; its not-so-subtle attempts at promoting engagement over presenting the content fill me with rage. I want my tools to fade into the background.
The punishments continued until the playfulness went away.
HTTP status code 418?
Haha thanks for this. There are some funny gnu error codes: https://www.gnu.org/software/libc/manual/html_node/Error-Cod...
Including EDIED and EIEIO
IIRC people actually tried to get rid of that. Thankfully it failed.
> I miss this kind of playfulness in computing.
it's still right here every day when Firefox says "gah this tab crashed".
> "Symbol table full - fatal heap error; please go buy a RAM upgrade from your local Apple dealer"
Ah, the old times when one could purchase a RAM upgrade or upgrade RAM after buying a computer. Now this would be:
"Symbol table full - fatal heap error; please go buy a new Mac with more RAM"
Not really. Classic Mac OS didn't support virtual memory, so everything had to fit in RAM unless a program itself offloaded data it wasn't currently using to disk. Modern OSes, however, all support swapping. Your compilation would continue, just much slower. To truly "run out of memory" on a modern computer, you have to fill up both the RAM and the disk.
Which version?
Regarding "everything had to fit in RAM": prior to real virtual memory, the Macintosh Resource Manager was capable of loading and unloading resources on the fly. Resources marked purgeable could be discarded when memory was needed. Code segments (another type of resource) could be loaded by automatically by the Segment Manager, but as you said would not unload until the application requested it or exited. INITs (system extensions) unloaded all code after initialisation by default (requiring extra steps to keep anything in RAM).
Virtual memory was built-in by System 7 (and I think available on supported hardware via 3rd party utilities earlier?).
Unloading code and resources is easy: you already have them on the disk. Unloading runtime state, though, that's the hard part. I never used classic Mac OS when it was current, much less developed for it, so no idea how often, if at all, apps used temporary files to work on more data than could fit in memory.
Ok, fair. Mac resources, including CODE resources, could be modified by the running program and unless marked as read-only were saved to disk by the relevant Manager before unloading. Photoshop was noted for its unusual use of disk, which was said to work like a virtual memory system.
Due to lack of protected memory, a few programs, especially on the earliest machines, with the least memory, 'abused' the display buffer by using it as RAM. Of course, this corrupted the screen, but it would all just be drawn again as soon as the program yielded time to the OS. Back then, only one user program could be running at a time, and it mostly had the screen to itself, so why not use it to copy floppies with less disc swapping?
I understand the arguments for unified RAM on a SoC, but it’s still a shame; even the new Mac Pro doesn’t have RAM slots.
The soldered-in SSD is worse, though. The SSD WILL wear out, so then you get to throw away your Mac?
But how quickly though? Do we know the write endurance of the Mac SSDs?
I'm not a Mac fanboy by any means. But SSD write endurance has become ridiculous. Even on a cheap-ish read-intensive server-class 1 TB NVMe SSD (~$300) you get around 1 petabyte of total write endurance, meaning you can write the entire contents of the disk roughly every other day for 5 years and still be within the warranty. That is orders of magnitude beyond what any consumer is going to subject their disk to.
There is always a possibility that a part of the computer will fail. But if the SSD is less likely to fail than any other random IC on the motherboard, having it soldered on doesn't factor significantly into the failure statistics.
Of course it's super annoying that you can't upgrade the disk size, but that's another point.
I've got two Macbook Pros - a 2014 15" (500 GB SSD) and a 2015 13" (1 TB SSD). Both are still going. I can't imagine newer Macs are worse. I'd be more concerned about replacing the battery!
The newer ones could be worse as they move to smaller process nodes and store more bits per cell. That said, the controller logic is improving, and larger capacities mean more room for wear leveling, so it shouldn't be too bad.
My MacBook Pro (2015, 15 inch) started having some SSD issues earlier this year after a bit more than seven years of heavy use, but it does seem to be partially mechanical because it would mostly happen after it had been in my bag.
Since I bought an M2 Pro to replace it, it hasn’t had the issue I think because I’m just leaving it at home and not flexing or squeezing it much. Perhaps the issue could be fixed permanently and properly with a bit of hot air to re-flow the solder balls.
> The SSD WILL wear out
This is by no means certain, certainly not enough to SHOUT about.
Modern SSDs have lifetimes that make them impractical to literally wear out, short of some kind of fault.
OK BUDDY YOUR WELL-CITED CLAIMS ARE NOTED.
Just like AirPods and AirPods Pro when their 7 cents worth of batteries die! A perfect system.
That will have to change in 2027 with the new EU battery regulation.
Do they? I thought it was only iPhones. Either way, Apple will find a way out...
No, it’s basically all portable consumer devices with batteries.
That's not even that bad if you have a hot air gun and a reball kit. What's worse is the flash chips having some kind of cryptographic identifier that locks them to the machine, so you couldn't replace the flash if you wanted to.
Is it accessible enough it can be soldered-out, or is this one of those effectively-permanent things?
The Mac Pro missing RAM slots was disappointing to me. Performance uber alles and all that, but upgradability has benefits as well. Until Apple started soldering RAM, I always did aftermarket RAM upgrades, and even recently doubled the RAM on a 12 year-old file server.
They could still allow adding more RAM that would just be slightly slower.
They would need to extend the slots from the existing ram, which would slow it down even when empty, or they would need to use up a lot of silicon adding more memory channels. It really does make sense to solder on RAM for best performance. Though alternative sockets like CAMM might help in the future.
But you can't do a trade-in, the RAM they sell is extremely overpriced, and as mentioned elsewhere there's no excuse for the SSD.
You can always use a PC if you want upgradeable RAM.
This is even more true today, because Apple Silicon Macs are able to store twice the amount of information in the same amount of memory, meaning that a paltry 8GB configuration can store 16GB of FizzBuzz boilerplate, 4 Google Chrome tabs, or 20% of the average node_modules.
I don't think that feature is exclusive to Apple Silicon, or Macs.
It's a joke (clearly not a very good one) based on my experience that when Apple first offered only 8GB of RAM on the first ASi Macs, everyone's response was that Apple Silicon can simply store more data in the same amount of memory or something. Back when I used macOS on Intel, 16GB was relatively comfortable, but I can't imagine that switching to ARM would magically halve memory requirements.
(Yes, the Intel Mac had memory compression as well. And my Windows 11 PC also has memory compression, but only if you also enable swap because Microsoft says "fuck you".)
The AS memory compression is better as it's hardware accelerated.
IIRC Windows doesn't like to overcommit memory and its allocation calls will actually fail instead, but I don't remember the details.
I think ASi Macs also use swap by default now, because the SSDs are so fast that it's actually a tenable solution to memory pressure. My Intel Mac never had any swap.
> IIRC Windows doesn't like to overcommit memory and its allocation calls will actually fail instead
Yeah... one of the reasons why I dislike it.
Intel Macs have swap too. You can check it in Activity Monitor: https://support.apple.com/en-gb/guide/activity-monitor/actmn...
zswap supports pretty much any cryptographic accelerator. Accelerating memory compression isn't really a new or particularly "better" innovation.
Not cryptographic acceleration, compression acceleration. Anyway, they ain't got one, so it doesn't matter if it's supported.
"a typedef name was a complete surprise to me at this point in your program"
Ah, the joys of fun compiler messages. I miss those days. I remember getting one from a vendor compiler that was: "No! But they'll only let me warn you. Danger Will Robinson! Danger!"
and: "Really! If you are fussing around with void *, just go home or at least back to your editor!"
I think the IT manager kept that as a vendor just because of the message (the SDK was meh, but also fun!).

Not much of a C programmer; what's the context around void* being a big deal?
Void pointers refer to anything and nothing. They are everything and all encompassing. What is pointed to by the void pointer could be what you want or it could be another universe.
Dereferencing a void pointer has no meaning. The compiler can do anything it wants because it doesn't know how to interpret the memory. It could give you the correct thing, it could warp a civilization in from a distant planet, or it could open a world-ending black hole. All are equally probable.
You're correct, although I'm fairly sure dereferencing a `void*` is a compile error on all but the most ancient and non-conforming compilers. I'm not even sure what it _could_ compile to, given that `void` has no size.
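A minimal sketch of the point in standard C (the names are arbitrary):

    #include <stdio.h>

    int main(void) {
        int x = 42;
        void *p = &x;               /* any object pointer converts to void * implicitly */
        /* printf("%d\n", *p); */   /* error if enabled: void has no size, so this can't compile */
        printf("%d\n", *(int *)p);  /* cast back to the real type before dereferencing */
        return 0;
    }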
void * was introduced after char * had been the pre-standard way of addressing any memory. Compilers of the era would let you use void * like char *, because it made it easier to change char * into void *.
Yes most modern C compilers will stop you from dereferencing a void pointer. Still, I couldn't help but bring up what can happen. Since I didn't witness it I can only assume 3 mile island was someone dereferencing a void pointer and it was easier to explain with a nuclear meltdown.
It really should be specified to specifically summon nasal demons, IMO.
Has any civilization been warped in already?
Not to my knowledge, but there seems to have been a slow warping out of civilization in recent years.
I was programming on MacOS (the original) since it was possible. I remember many of these error messages! Especially "Too many errors on one line (make fewer)".
...also remember 45 minute builds when a header file changed.
In those days I wrote exclusively system extensions, plug-ins and XCMDs, using a mix of 68k, C and Pascal. Each project was quite small, so compile time was never a problem and MPW was a paradise. My largest XCMD actually had bits in all 3 languages which MPW happily linked together, and some projects had various little blocks of code to stick in the same file, all of which could be automated easily.
I remember these error messages coming up and laughing out loud when I saw the rare ones. Nice work, whoever did it!
I used this compiler for years and eventually came to be able to “decompile” the 68k object code it produced back to C code in my head on the fly unless the function was too large. Using MacNosy I could rebuild the C source for an app in usually only a couple of hours. I had a script that converted a MacNosy file of an app into an assembler file and rsrc file and I could translate functions to C one at a time while having a buildable app equivalent to the original. I originally used the tools for hacking games but sometimes used it to fix bugs.
The MPW C compiler's code generation was so predictable in part because of the symmetry of the 68k instruction set. They wrote a simple compiler and it worked. For the most part effort was spent elsewhere. Since you could reasonably predict what code would be generated, if you were unhappy with the code generation you fixed the source. I like that the javac compiler has a similar ethos, with similar effect. Once you know the patterns to use you can generate fairly close to optimal byte code.
My favourite syntax error message produced by the Glockenspiel C++ compiler (a cfront derived piece of junk that I used in a training company in the early 90s) was simply "core dumped". This was slightly tricky to explain to people already struggling with C++, and who had paid us money for the course.
The users could simply run a debugger to get a backtrace from the core file... then with some experience, they would learn to associate different hex addresses with different kinds of errors. No harm, no foul.
I take it that you have never worked for a training company :-)
"Call me paranoid but finding '/*' inside this comment makes me suspicious"
That, Sir, is none of your business.
...I kind of wish compilers supported nested block comments. So if there's a /* inside of a /*, it would take two */'s to end it.
Idk, maybe that would be a terrible idea in practice. But there are lots of instances where it would have saved me time.
You're probably looking for "#if 0" / "#endif".
Yeah it took me a while to adopt this common practice when I need to "comment out" a large block of code. Just use the preprocessor, it's much simpler.
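A minimal sketch of the idiom (debug_dump is a hypothetical stand-in); unlike /* */, the disabled region can itself contain ordinary comments:

    #if 0   /* everything down to the matching #endif is dropped by the preprocessor */
    int debug_dump(void) {
        /* a comment in here doesn't end the disabled region */
        return 42;
    }
    #endif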
FWIW, replacing comments with whitespace is also done by the preprocessor.
The fact that you can't do this with new fangled languages is one of the reasons I don't use new fangled languages
That and other things I dislike about many of the new fangled languages.
I prefer using completely unfangled languages, thank you very much.
The D language supports /+ and +/ as a variant of /* which supports nesting, so you can pick whichever you need for a given comment.
Not terrible at all.
It's super-useful to temporarily comment out a bit of code, and then to comment out a larger block surrounding it. Especially when debugging.
Sadly I've never used a language that supported that.
Common Lisp has nested comments with the #| reader macro.
How about explicit depth levels, specified by asterisk count?
e.g. '/*' and '*/' would match each other, '/**' and '**/' would match, and so on.
That way, you would have full control of the depth of the comments, removing other comments wouldn't break the inner comments, etc.
I do run into the same issue you're describing, so I think there's value in the idea.
The core idea here is that when I'm commenting out a block of code for testing, I don't really want to think about what is inside of it. So while I think this might help, I'd rather just have /* with one asterisk nest.
What I'm not sure of is whether there's an edge case I haven't thought of which would make this problematic.
Like Lua, where comments can be delimited by `--[==[`..`]==]`, where the number of equals signs can be anything but has to match in order for the comment to actually close?
FYI, Rust and many other modern languages do this.
Already an option on Borland compilers for MS-DOS.
I believe OCaml does this
Tangential to the content of the page: I really enjoyed how many MPW utilities generated output, including error messages, in the form of commands. Your terminal was an editor buffer, so you could cursor up (or click) on the appropriate line then press something like cmd-enter to pull up the file in question (among other things).
I think it was just the enter key to execute the selected text.
You're probably correct. I forgot that Apple labelled the enter key as return on the alphanumeric part of the keyboard and as enter on the numeric keypad. I recall some software (possibly MPW) treating cmd-return as enter. Or something to that effect. It has been about 20 years!
Yes I think you’re right. I was talking about the standalone enter key, or you could also do some modifier and return. It has been a long time
Hm. Sounds like Plan 9.
"a typedef name was a complete surprise to me at this point in your program"
I've seen this list so many times and this one makes me laugh out loud every single time.
Much better than gcc's "Redefinition of ..." or "Static declaration follows non-static"
Previous discussion: https://news.ycombinator.com/item?id=30238928
To inline my comment in the previous thread:
Just for some context, the MPW C compiler that produced those messages was actually not developed internally at Apple, but was rather done by Green Hills Software [1] under contract as mentioned on the wikipedia page [2] and its source [3] which is funnily enough about this exact same topic.
[1] https://en.m.wikipedia.org/wiki/Green_Hills_Software
[2] https://en.m.wikipedia.org/wiki/Macintosh_Programmer%27s_Wor...
[3] https://web.archive.org/web/20140528005901/http://lists.appl...
My favorite error message was produced by the Univac Fortran V compiler, circa 1970: “Warning: floating point equality tests are nugatory.” I pride myself on my vocabulary, but I had to use the dictionary.
I recall from 1965 getting an error message something like this from the Fortran compiler on a Univac 1107 system after receiving too many error messages: "Do not attempt to learn Fortran using Monte Carlo method. Buy a manual in the user office."
The old Clipper 5 compiler had some fun error messages. The two that I remember running into were “Ford Maverick Error”, and my personal favorite, “Carnage! Module name crushed in compilation disaster!”. I ran across both abusing its preprocessor.
I learned to program on various dBase languages. That Clipper 5 preprocessor was quite the thing! It reminds me of https://research.swtch.com/shmacro but it met a real need, lowering a COBOL-like syntax to a C/Pascal-like one.
(dBase code looks like https://github.com/harbour/core/blob/master/tests/ntx.prg , and https://github.com/harbour/core/blob/master/include/std.ch is an open-source reimplementation of Clipper's preprocessor definitions).
I’m so curious, what could a Ford Maverick error possibly signify?
I’m honestly not sure. I met a couple of the developers at a convention a few years later, and found out the carnage one was where a stack of tokens from the tokenizer was unexpectedly deleted, but we didn’t talk about the other one. I suspect someone probably owned a Maverick that was buggy tho, or didn’t run.
That whole subdirectory is full of some old school internet humour.
alright, these are hilarious. i miss this playful attitude in modern-day software engineering, we need more of it.
also, the note on the copyright is hilarious.
> "...And the lord said, 'lo, there shall only be case or default labels inside a switch statement'"
Did it seriously not let you have a goto label inside a switch?! This seems like an odd restriction, as all 3 are the same kind of thing.
They're different enough that you can't goto a case or default label.
From that perspective, indeed, very true - but then that's exactly why you need to be able to have goto labels inside a switch!
I really loved MPW Shell.
Being able to have a worksheet with random shell commands I’d built up, triple-clicking a line and hitting enter to run the selection.
It was quite a thing.
I had NFI what MPW was until I read this: https://en.wikipedia.org/wiki/Macintosh_Programmer%27s_Works...
MPW had an interesting About box animation too. I recorded this back in 2011 using vMac emulator. https://www.youtube.com/watch?v=aJn3qxK9Br0
there were also MetroWerks bindings for MPW, long ago, but Apple killed it with fire