Apple Dylan IDE (2014) (web.archive.org)
What I like the most is the abstraction away from plaintext source files. Imagine a C and C++ IDE that hid the (often ugly) source files from you but only exposed individual function definitions - it could automatically keep header files in sync, for example, and automatically place each free function or class member in the right file without manual refactoring.
(I still think it's outrageous that C is still a single-pass language - we shouldn't need to keep separate declarations and definitions in sync any more)
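A minimal sketch of what that single pass forces on you (the function names here are invented for illustration): calling a function before its definition needs a hand-maintained forward declaration.

    /* Sketch only: why single-pass compilation requires separate declarations. */
    int helper(int x);                 /* forward declaration, kept in sync by hand      */

    int twice_helper(int x) {
        return helper(x) + helper(x);  /* modern compilers reject or warn on this call   */
                                       /* if the prototype above is missing              */
    }

    int helper(int x) {                /* the actual definition, further down the file   */
        return x + 1;
    }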
This is what Light Table aimed to do - create an extensible, abstracted code editor for many languages. See also Code Bubbles (Java) and most Smalltalk environments.
http://lighttable.com/2012/04/12/light-table-a-new-ide-conce...
> It’s no secret that I really like Clojure and as a lisp, it was the easiest language for me to start the prototype with, but there’s no reason this couldn’t be done for any language with a dynamic runtime. The rest is mostly simple analysis of an AST and some clever inference.
I have looked into this. It is kind of criminal that for most real-world languages (Ruby [1], C [2], etc.) it's not possible to just define a grammar and throw it at a standard parser generator - they generally have one or two quirks which make this infeasible.
In my ideal alternate universe it would be considered unthinkable to publish a language without also publishing a grammar in a standard format for said language, which can then be plugged into your favourite text/semantic/tree editor. Our tools should dictate our languages, not the other way around.
[1] http://programmingisterrible.com/post/42432568185/how-to-par...
Most programming languages are context-sensitive [1] (at least with unbounded nesting), so parsing them correctly and efficiently is mathematically impossible. All practical implementations have to take shortcuts.
[1] Mainly due to begin..end blocks, curly braces or indentation (as in Python)
Are most programming languages really context-sensitive? Or aren't they mostly context-free?
My days of fiddling with writing parsers are long ago (https://www.codeproject.com/Articles/7035/A-Java-Language-ID...) but if I remember correctly most languages aim for at most an LL(2) grammar, meaning they are designed so the parser doesn't have to peek more than two tokens ahead before being able to make a correct determination.
C has some fun ones:
a * b;
Is either a times b (if a is a variable) or a declaration of a variable b with type pointer-to-a (if a is a typedef).
Also:
some_type b = {a, b, c, d};
Is only valid if some_type is an array or struct type, which is possibly defined elsewhere in the source.
(I tend to see the compound-literal syntax (some_type){ a, b, c, d } in some code bases, which is a bit better.)
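To make the first ambiguity concrete, here is a small hedged sketch (the identifiers are made up): the same statement is either a declaration or an expression depending on a typedef that may live in another file entirely.

    /* Sketch of the typedef ambiguity described above; names are illustrative. */
    typedef int a;          /* with this typedef in scope ...                    */

    int main(void) {
        a * b;              /* ... this declares b as a pointer to int.          */
                            /* If a were instead a variable (e.g. "int a = 3;"), */
                            /* the same line would parse as the expression a*b.  */
        (void)b;            /* silence the unused-variable warning               */
        return 0;
    }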
Context free grammars are perfectly capable of expressing matched curly braces, even with unbounded nesting. Am I missing something?
Yes, thinking about it, that alone is not sufficient. Still, I'd claim that most languages are not context free.
CPP (the preprocessor) aside, C is not context free due to typedef making identifiers ambiguous. Also if-then-else? Since C++ templates are Turing-complete, the grammar is probably unrestricted.
Python is not context free due to

    if ...:
        stmt1
        if ...:
            stmt2
        stmt3

stmt3 and stmt1 have to share the same level of indentation to form a valid Python program, but they might contain arbitrary indentation within brackets.
The RealBASIC IDE also gives you a function-based editing experience instead of storing entire text files full of code.
It was kind of neat, but also led to a lot of clicking around. It's one of those Holy Grail ideas people have been talking about forever, but I'm not convinced it's actually that superior given all of the ecosystem downsides there are to moving away from text.
I think you probably could do something better than text files, but it has to be a lot better to get over the chasm of losing all of your familiar editors, command line text utilities like grep, easy copy/paste, etc.
Text is a lowest common denominator medium. People get hung up on the "lowest" part, but the "common" part is pretty damn convenient.
And Self's Morphic, which went even further ("static" objects would be defined via UI elements with only the method bodies being written in small editors)
> (I still think it's outrageous that C is still a single-pass language - we shouldn't need to keep separate declarations and definitions in sync any more)
Maybe not, but I have a hard time believing that it can be considered even a slight annoyance to anyone but people just learning how to program. The same people that are annoyed by it probably benefit from it anyway; it forces them to actually think about what they are doing.
Also makes it easier to get an overview of the code using just a text editor. KISS.
> Maybe not, but I have a hard time believing that it can be considered even a slight annoyance to anyone but people just learning how to program.
Slight annoyances pile up. Ergonomics matter even for seasoned pros. Just because they have learned to ignore the garbage, doesn't mean their room is clean.
True, but this truly is a non-issue.
Seems like status quo bias. If C did not have this wart, no one would suggest adding it just to "force beginners to think about what they are doing" or "make it easier to get an overview of the code using a text editor".
And human factors matter. In the context of language design, they're the only things that matter - the entire function of a programming language is to make a complicated task fit a human brain as neatly as possible.
> no one would suggest adding it
Disagree.
Visual Basic / Excel VBA does that, and I thought it was the way to go until I was 18. But when you can only see one function/event handler at a time, you lose the understanding of how programming works. If only I had been exposed to plain-text programming (Maven + .java) instead of using Windows/visual editors for years...
Old-school VB actually shows you all your code (sorry, best video I could find quickly: https://youtu.be/zmyZCmX2LWQ?t=2m21s). They were aware of the issue you raised, having tried separate-function editing as far back as QuickBasic for DOS.
The usability engineering of old-school VB (of which VBA is the modern representative) was frankly top-notch, and I feel no shame following their footsteps in the https://anvil.works code editor.
Unfortunately, preprocessing would make this difficult or impossible. Even with modern C++, people still rely on it.
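To make the preprocessing point concrete, a hedged sketch (the macro and struct are invented for illustration): one macro expands into several function definitions, so the neat "one definition = one editable object" mapping breaks before the compiler proper ever sees the code.

    /* Sketch only: how the C preprocessor blurs per-function structure. */
    struct point { int x, y, z; };

    #define DEFINE_GETTER(field)                     \
        int get_##field(const struct point *p) {     \
            return p->field;                         \
        }

    /* Each line below becomes a full function definition after preprocessing,
       so there is no one-to-one mapping from source text to functions. */
    DEFINE_GETTER(x)
    DEFINE_GETTER(y)
    DEFINE_GETTER(z)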
You are thinking of a structured editor.
But as to the single feature you mentioned, see for example Code Bubbles:
Extremely relevant, particularly his remarks: https://youtu.be/8pTEmbeENF4?t=1174 (19:34 if t= doesn't work).
I think I would love this.
Macintosh Common Lisp is another one. It was so much fun to use and worked so well with MacOS. Echoing etchalon's comment, sometimes I miss OS 8 (also 9).
Note that the Apple Dylan IDE is written mostly in Macintosh Common Lisp. Only its Interface Builder was written in Dylan, IIRC.
MCL had its own interface builder. Didn't Dylan just use MCL's?
No, it had its own - at least in the 'released' technology preview. The Dylan Interface Builder was written in Dylan itself and was loaded directly into the running Dylan application. The Apple Dylan IDE itself was running as a separate MCL application. You can see from the screenshots that the IB also had a different look&feel.
Command-E for Execute!
It's always interesting to look back on those days at Apple when they were so innovative and took so many risks when it came to software. Technologies like OpenDoc, Cyberdog, Hypercard, AppleScript, Taligent were really quite unique.
You clearly never used Taligent and OpenDoc. They were innovation by committee and completely revolting. You can read the introductory Taligent tutorial here:
https://root.cern.ch/TaligentDocs/TaligentOnline/DocumentRoo...
As for Hypercard... it wasn't Apple's innovation but Bill Atkinson's (he designed and wrote the whole thing). Once Bill left (6 months later), no one else was really able to manage the codebase and it rotted for a decade or two until it was finally cancelled.
I've really been enjoying reading Brent Simmons' blog about his efforts to get Frontier running on modern macOS:
http://inessential.com/frontierdiary
I suppose that Apple Dylan was similar to Frontier in the sense that they were programming environments built around an object database. Frontier was a shipping product though!
Back when they actually had an R&D department.
Hypercard should not be on that list; it was successful despite official neglect.
https://discuss.atom.io/t/the-deuce-editor-architecture/2218 goes into a bit of detail regarding the editor, Deuce. Note that you can download and play with the IDE and read the source code; it is part of the Open Dylan distribution, but it sadly only works on Windows right now: https://opendylan.org/
I wrote that on the Atom forums ... that's the editor in Open Dylan, which used to be Harlequin Dylan (and was Functional Developer after Harlequin folded and before being open sourced).
I had parts of Deuce up and running as a terminal-based editor at some point. Well, I didn't do input which is clearly a very important thing ... but I'd made good progress on the output side of things. :)
It's funny because this is basically the VBA editor. You have a tree on the left with classes and modules, then in the main pane you have a drop down at the top to select functions and the text editor (if configured that way) will show a single function.
I wonder when Microsoft will do any work on the VBA editor. It's not like VBA is going away. Office users still write new VBA every day. They need it.
It's clear Microsoft intends to replace VBA with JavaScript that will run both in desktop Office and Office Online - we have it already with "Office Apps", but Office Apps are sandboxed pretty badly and have zero access to COM and legacy Office components. Assuming Microsoft eventually brings JS-in-Office to feature parity with VBA, then they can kill off the old editor.
It will take many years before they can do that. First, because they haven't provided an alternative yet (JavaScript is used to create add-ins, not for users to create scripts or make new functions available). Second, because you have millions of business processes that rely on VBA. So as far as I can tell the transition hasn't even started.
In case people don't know, this is often referred to as a "projectional editor" and the paradigm is also known as "Intentional Programming" in the sense that the programming environment helps capture the intent of the authors.
Popularised (if we can call it popular!) by Charles Simonyi of Microsoft fame, who created the company called Intentional Software that was recently purchased by Microsoft.
There was an interesting editor called Isomorf that demonstrated the benefits of a non-text-based editor.
(site is down https://isomorf.io/) (youtube demo https://www.youtube.com/watch?v=awDVuZQQWqQ)
I would really like to see something like this take off.
I firmly believe we can only unlock the next generation of software engineering by breaking free from plaintext. Think about it: how many more ASCII symbols can we mangle together to create meaning and context?
A structural editor takes all of that away. Suddenly syntax becomes a choice just like the colour theme of your editor.
Plaintext programming puts us into a fight with the computers, because on the one hand we need to keep the syntax parsable, and on the other hand humans need to read and write it.
It's a huge conflict of interest. If you want to provide information to the compiler, the syntax becomes hard and complicated (rough example: Java). If you want to keep the syntax human-friendly, the program becomes weak from the compiler's point of view (rough example: Python).
Our editors need to be context-aware so they can hide/show relevant information and encourage people to provide as much information about the context/domain as possible.
If you look around you see we have been doing a lot of this stuff in the past decades but for some reason we just half-ass it by baking stuff on top of plaintext.
For example embedding documentation or even unit tests (python "doctests") in comment blocks in ad-hoc languages.
Or we embed naming conventions and so on to relate concepts with each other.
For example a "User.js" file and "User.spec.js" file for a test.
If we kept information in a structured manner suddenly so many of our problems would go away.
For example we will get structured version control. No need to have something like git tracking lines in files.
We will get unit testing that is always correctly tied to its relevant components.
We will get documentation that is structurally accurate. The editor could switch between programming and "documentation" mode. But the documentation would be a first-class object of the program not just some text that is shoved into it somewhere.
We will get much smarter re-factoring.
We will get much better compatibility across versions, because there's no textual syntax to worry about breaking: the program becomes a semantic tree, and older programs can be "transformed" to fix them or make them compatible or something similar.
Because we are text-free the environment can encourage the programmer to provide a lot more information because it can get folded/hidden/etc.
The "units" will all have unique identifiers so confusion in naming and so on will be significantly reduced.
Perhaps you could create and publish modules/units in some central repository then use them in your projects. Kind of like NPM for example but a lot more structured.
So you could import a bunch of "units"/functions from someone else's catalogue.
Because everything could have metadata attached to it you could imagine for example "security advisories" could be attached to certain units such as a function and published.
The environment would know exactly in which places you are calling that exact function and it could alert you to the fact.
You could do semantic find and replace ("show me all SQL queries", "show me all untested functions", "show me all functions modified by John Smith in the last 14 days", "show me all undocumented functions", etc...).
You could do smarter CI/CD by way of defining rules and constraints on the structure of the program.
Made-up examples:
- If the changeset involves objects tagged with "security", require approval before deploy
- If the changeset introduces new SQL queries, ping the DBA team
- If the changeset introduces more than one function without corresponding documentation, show a warning
- If more than 50% of the new objects introduced in the changeset lack corresponding test cases, fail the build
- You get the idea...
The point is, all the cool stuff we'd like to do depends on us having a lot more structured information and context about our programs and a plaintext environment is not suitable and is hostile towards that.
It's not 'Intentional Programming', just because it uses a source object store and some browsers too.
"Intentional Programming" doesn't have a very rigid definition.
The most defining element in it is the projectional editing of a structure.
That's why I said "the paradigm".
Why should it be 'projectional editing'? All you see in Apple Dylan is a bunch of browsers/editors, conceptually similar to what a Smalltalk or Interlisp IDE did, but with a different UI.
As you can see in the screen shots, it presented Dylan source code, but through a bunch of browsers, folding editors and navigation tools.
There is no 'intent' captured.
I'm not intimately familiar with Apple Dylan, but at least based on the information on that page it very much reminded me of intentional software.
"This also illustrates a key feature of the Apple Dylan TR: every part of your source code was not just text in a text file, it was a separate object in an object database. This gave the IDE incredible power, because meta-data about each object could be maintained to facilitate browsing via a variety of relationships and views, but less-than-optimal implementation may have been responsible for some of the performance problems of the Technology Release."
Having the program as an object stored in a database with metadata and editing/browsing it with an editor is intentional software as far as I'm concerned (Simonyi's definition has nuances and extra constraints, I know).
It's not like there are so many of these around that we have to categorise and differentiate them anyway. There are very few of them around, so for now, as far as I'm concerned, they all go in the same bucket of "things that attempt to break away from plaintext" in my view.
> Having the program as an object stored in a database with metadata and editing/browsing it with an editor is intentional software as far as I'm concerned
That would make any VCS- and annotation-aware IDE an intentional-programming tool. I don't think that's really the case and it wasn't for the Dylan environment either.
Hello! I’m Aaron one of the co-founders of isomorƒ. Sorry our site was down when you visited. We’ve actually just pushed a new demo (https://isomorf.io/#!/demos) and we would love feedback on it (feedback@isomorf.com)! We are planning a public beta soon and we’re looking for participants (https://isomorf.io/#!/sign-up).
Your descriptions of projectional editing and intentional programming very much resonate with the vision we are pursuing. We agree that structured editing can create much more efficient paradigms for refactoring, reuse, and stability. Why do we search for code based on text rather than real function signatures and even input/output examples? Why do we need to change languages just to use new control structures? Why do we accept the coarse granularity of file-level commits? Why are things like parallelization/caching part of code rather than ex-post runtime configuration? Why does deployment need to involve endless command-line idiosyncrasies?
This is definitely a complex problem, but we feel a paradigm shift is necessary to make a step change in software development efficiency. We’d love to discuss further.
Isomorf seems like a great way to write Isomorf code in many languages, in much the same way that you can "translate" Lisps to any language by writing a tiny interpreter. If you can push one button and get either Haskell or Javascript, either weird Haskell or weird Javascript is likely coming out.
Further, watching that video I was reminded of using the Equation Editor in Word long long ago. Frightfully un-ergonomic pixel-fiddling ("no, I meant put the insertion point INSIDE that expression!") compared to entering the same formulas boring-ASCII-style in LaTeX.
> For example we will get structured version control. No need to have something like git tracking lines in files.

If there's "no syntax", what exactly is this showing diffs of? ASTs? "We will get" is handwaving a lot of R&D here, even before you get to "how can Programmer A communicate about a diff to Programmer B when they read the code with different syntax?"

I don't see a huge difference between things like "tests embedded in metadata" and "tests embedded in comments"; they're both blocks of associated bytes that tooling is responsible for interpreting. Many of the examples you're describing are entirely possible with things like static analysis, annotations, etc. The difference is that in order to get them we didn't have to throw away EVERY tool we were using and start over. Bootstrapping a new non-text environment would be a substantial effort, and doing it without ending up in the same spot as M-expressions (where an intended for-machines-only representation displaced the more-complex planned notation) would be even harder.
I'd recommend checking out some of Joe Armstrong's stuff - he's been pondering the "global registry of functions with unique name" thing for a couple years.
With regards to the "diff" which I forgot to address earlier.
The changeset still has a textual representation.
So the diff could be like a webpage that is showing:
On 5 June 2017 John Smith authored the following changeset:
- Added new function that calculates sum of given numbers (click to view)
- Removed function called "old calculate sum"
The program is not the syntax; the syntax is just a representation of it.
So if I have added a new function we can both look at the diff.
We will both see that I have added a function.
We could both choose to view the body of the function in the same syntax, or you could configure your editor to show it in a different way. In the same way that you could configure the font size or the colour scheme.
The point is to reduce the importance of things that are not inherently part of the domain and give that importance to the problem/domain itself.
So you could view a "number node" like "5,000,000" and I could view it like "5_000_000".
I could view a function definition like "declare function named blah" you could view it as "def fun blah".
Because we are still looking at the exact same thing it makes no difference in the result of the program.
It will be something close to an AST.
"syntax" is the textual representation of an underlying structure.
When all of our tooling revolves around modifying text/syntax, not the underlying structure, we make our lives very difficult and create new "artificial" problems with regards to parsing/extracting and so on.
A sufficiently good editor could make the experience even better than a normal text editor.
The crucial difference is that there's no such concept as "lines" or "files". But it could visually almost represent it to you in that way.
So you would type "let x = 1;", but instead of saving those characters in some file, it would modify the program/tree and add a node that represents the fact "assign 1 to name x".
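A hedged sketch of what such a node might look like in memory (the type and field names are invented, not taken from any real tool); the same tree could then be rendered back as "let x = 1;" or in whatever surface syntax the reader prefers:

    /* Illustrative only: one way a structured editor could store
       "assign 1 to name x" as data instead of characters. */
    enum node_kind { NODE_ASSIGN, NODE_NAME, NODE_INT_LITERAL };

    struct node {
        enum node_kind kind;
        const char *name;          /* used by NODE_NAME               */
        long value;                /* used by NODE_INT_LITERAL        */
        struct node *target;       /* used by NODE_ASSIGN: the name   */
        struct node *expr;         /* used by NODE_ASSIGN: the value  */
    };

    /* "let x = 1;" becomes a three-node tree, never a string: */
    static struct node n_x   = { NODE_NAME,        "x", 0, 0, 0 };
    static struct node n_one = { NODE_INT_LITERAL, 0,   1, 0, 0 };
    static struct node stmt  = { NODE_ASSIGN,      0,   0, &n_x, &n_one };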
As I also stated, I know many of the things I have mentioned are already kind of possible with the tooling that we have, but they are bolted on in a way that is disconnected from other components, so we lack a more global view into the system as a whole.
This creates little isolated/disconnected worlds within a system and significantly reduces the potential and benefits of things we could do.
It also reduces accuracy to the point that you always have to have your guard up and can't fully trust the system. It turns into a helpful heuristic but not a first-class part of the system.
It is one thing to be able to "parse all of this crap and extract these blocks of characters that we think are the documentation" vs having strong, accurate links established so that the entire system is capable of querying and navigating the metadata associated with objects.
Tests and benchmarks and documentation and all sorts of stuff can become first-class citizen objects in a programming language, and when the compiler and tooling get exposed to that information they can suddenly start helping you a lot more than they otherwise can when they have no view into that stuff (e.g. the compiler ignoring comments).
Lots of things that are today very cumbersome and require sophisticated parsing and so on become very easy, because you don't have to parse/compute anything: information about the structure is readily available and queryable.
For example all your "logging" stuff could simply be tagged as "log".
Then when you are compiling you could say "take out all function calls tagged as log as if they didn't exist".
Again there's a huge but yet subtle difference between that and what we are capable of doing today.
Today we have ways of kind-of achieving the same thing with #ifdef compile constants and so on, but one is inherently semantic and logically sound, while the other is just arranging a sequence of side effects to achieve a desired result without exposing that insight to the compiler/environment itself.
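For comparison, a hedged sketch of the conventional #ifdef route mentioned above (the macro names are made up): the logging calls are removed by textual preprocessing, and the compiler never learns that they were "log" calls in the first place.

    /* Sketch only: compiling logging out with the preprocessor. */
    #include <stdio.h>

    #ifdef ENABLE_LOG
    #define LOG(msg) fprintf(stderr, "%s\n", (msg))
    #else
    #define LOG(msg) ((void)0)     /* calls vanish before the compiler sees them */
    #endif

    int add(int a, int b) {
        LOG("add called");         /* no semantic "log" tag survives to the compiler */
        return a + b;
    }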
That difference is the essence of intentional programming.
The system allows you to express and preserve your intent in its original semantic form, which is the blueprint for the software that is produced from it.
If you are interested in learning more about this topic, you can watch some of Charles Simonyi's presentations on YouTube; they are long but worth a listen.
https://www.youtube.com/results?search_query=charles+simonyi
Sounds great in theory.
The dirty reality is that text is the lowest common denominator, and as such, most reusable.
Speaking of Microsoft, I would rather manage flat config files than structured registry editors.
I know. That's why I think it's a very difficult but worthwhile problem to tackle.
We have stopped questioning certain things and take them for granted, and those things come with constraints that limit our progress.
(Such as the idea that any programming must involve editing text that is stored in a collection of files and folders)
I also think we under-estimate how pleasant a structured editor can be.
Most programmers associate those things with "toys" or think of them as not hardcore enough for their skills, some clunky drag-and-drop GUI editor that insults the mad skills of the precious programmer who has embedded vim/emacs bindings in his/her muscle memory.
It doesn't have to be that way. If it's flexible enough it can look almost like a plain text editor. But it would be one on steroids. It could do holy-grail stuff in terms of suggestions/intellisense/auto-complete and so on because of its rich understanding of the underlying structure.
Some days, I still miss OS 8.
What I miss the most is the extension system, where installing something (a driver, a new feature, etc.) on the OS is as simple as dropping a file in a folder. And uninstalling it is as simple as deleting that file. And you know there is nothing left after you remove the extension.
What I don't miss is when the extensions conflict, crashing the machine on boot.
I also don't miss the process of removing each extension one by one, rebooting each time to find the culprit.
That's still way better than trying to figure out which driver is causing a BSOD on windows today.
Let's keep in mind this was a pre-protected memory OS.
What I don't miss is the single address space memory without any protection or the relatively primitive multitasking.
Me too.