TypeScript Without Transpilation (2021)
incrementalelm.com

Sorry for being that guy, but this should probably be called "typechecking without transpilation".
What ends up being written is not TypeScript syntax (which does require transpilation to run).
JSDoc is awesome for incremental adoption though, I've migrated a few codebases this way without too much pain.
Not TypeScript the language, but TypeScript the typechecker program. "TypeScript without transpilation" is still perfectly accurate.
Vouched. This is the point of the thing - the typechecking benefits come from running `tsc` on your code (or VS Code will do it for you in the background and show you type errors, if you have TypeScript installed somewhere it knows about).
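A minimal sketch of that workflow (the file contents and names are just for illustration): a plain .js file with `// @ts-check` plus JSDoc types, which `tsc --noEmit` or VS Code's TS server will check without any build step:

```javascript
// @ts-check
// Plain JavaScript — no transpilation. Running `tsc --noEmit` (or just
// opening this file in VS Code) type-checks it via the JSDoc annotations.

/**
 * @param {string} name
 * @returns {string}
 */
function greet(name) {
  return `Hello, ${name}!`;
}

// greet(42);  // tsc would flag this: number is not assignable to string
```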
Incidentally I've been doing my JS projects this way for 2-3 years and I really dig it.
I started one side project like this recently and it's been great so far.
There are a few clunky bits, but I hardly ever run into them.
Actually, since this conversation is going, here's something I just learned this week that somebody might find helpful. I used to consider enums one of the clunky bits, but I just ran across the `keyof` operator:

```javascript
var modeNames = { FOO: 1, BAR: 2 }

/** @param {keyof typeof modeNames} mode */
export function doThing(mode) {
  // the JSDoc type above is equivalent to {'FOO'|'BAR'}
}
```

Probably old hat to TS people, but since I came at it from the other direction (I'd been using JSDoc for traditional doc-creation reasons before I realized VS Code could use it for linting and type hints) it was news to me.

That is indeed a neat trick!
That's my one grief with tsc: it does too many different things.
- Type checking
- Transpiling to multiple formats (CJS/ESM, JSX and the like)
- Generating declaration files (.d.ts)
While there are modern alternatives for transpiling, much faster than tsc (esbuild, swc), typechecking and declaration generation are still very slow operations.
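As a sketch of how that split tends to look in practice (assuming esbuild is installed; entry point and flags are illustrative), the fast tools take the transpile step while tsc keeps the jobs only it can do:

```sh
# Type checking: still tsc, no output files
npx tsc --noEmit

# Transpiling: much faster via esbuild (or swc)
npx esbuild src/index.ts --bundle --format=esm --outfile=dist/index.js

# Declaration files: still tsc, and still slow
npx tsc --emitDeclarationOnly --declaration
```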
Is there some command you can run to rewrite all the JSDoc as actual TypeScript?
Programmatically yes: https://github.com/microsoft/TypeScript/blob/28cd1fbd13b8d09...
You just need to call this from somewhere like ts-morph.
JSDoc can get you pretty far, but it can be clumsy sometimes. There’s a [TC39 proposal](https://github.com/tc39/proposal-type-annotations) to allow types to live in JS code and be treated as comments (similar to Python's type annotations today).
Could that improve execution performance?
No. First paragraph of the proposal:
> This proposal aims to enable developers to add type annotations to their JavaScript code, allowing those annotations to be checked by a type checker that is external to JavaScript. At runtime, a JavaScript engine ignores them, treating the types as comments.
The problem is handling errors. If the engine does a full analysis of every code path to ensure the types aren't violated, there's a startup penalty - and this applies every time a dynamic import is used, which could interfere with the user as they interact with the page. I'm not sure such an analysis would even be possible - there are many ways to add layers of indirection to JavaScript code.
If the engine doesn't do a full analysis up front but instead checks every time a type is used, you end up with, at best, roughly the same performance characteristics that current JIT engines already have.
> If the engine doesn't do a full analysis up front but instead checks every time a type is used, you end up with, at best, roughly the same performance characteristics that current JIT engines already have.
There are definitely optimizations that JIT engines can do, pre-optimizing the "shapes" expected of objects, especially with respect to rare/optional parameters. In some of the current JITs an extra parameter that hasn't been used before/in-a-while drops code from a faster path to a slower one. Knowing ahead of time that parameter might show up eventually can lead to the faster path in more cases. You still get slow path penalties for type violations and other unexpected shapes, but that benefit of more "shapes" hitting the fast path sooner (less "learning time" for the expected shapes by the JIT, because it can assume the type is a correct description) may overall be a benefit over untyped performance.
Worst case, yeah, it is "roughly" the exact same performance characteristics, but best case it does buy some performance benefits.
That said, yeah the current proposal to TC-39 does not include that for a number of good reasons and it would need follow up proposals to provide type semantics that JITs could count on. Though if the first proposal succeeds, such follow up proposals become more likely. (We are seeing that a bit in Python as some follow up PEPs have converged some of the base type semantics, though still not yet with runtime performance in mind.)
Does this mean the JIT engine will be compiling code that it would not have otherwise? That in itself might end up being a small penalty, given that so much code is never a hot path anyway.
If not, I'm not seeing the type annotation adding value that the engine doesn't already have from existing runs.
It should mean compiling (a lot) less code in the best cases. (JITs compile early and often.) Roughly, today:
1. JIT sees a function take a lot of objects {x: int, y: int}
2. JIT compiles a hot path of that function for {x: int, y: int}
3. JIT sees an object of {x: int, y: int, z: int}, goes to a slow uncompiled (deoptimized) path of that function
4. Over time JIT sees a bunch more of {x: int, y: int, z: int} and compiles a hot path for that function
5. Over time the JIT sees that the compiled hot paths of {x: int, y: int} and {x: int, y: int, z: int} share a bunch of code and get called roughly evenly and further compiles an even more optimized shared hot path of {x: int, y: int, z?: int}
Note that this isn't the case in every JIT, or every runtime and mileage always varies when talking about JIT optimizations, but that's a roughly common way to look at that.
In theory, knowing ahead of time that expected/preferred shape is {x: int, y: int, z?: int}, the JIT could skip steps 1-4, start from step 5, just one "perfect" hot path for the most common expected object shapes, and see fast code for every {x: int, y: int} and {x: int, y: int, z: int} object the function takes, right from "the beginning" of run time.
(It might still fall back to a deoptimized path for a strange, rare {x: string, y: int} or something like that, but it still has a better hot path for what should be the more common/likely arguments. Which is why worst case and possibly average case having type knowledge doesn't perform better than existing JITs. But it can still enhance the best case.)
(ETA: Of course, Step 0 is determining that function is on a hot path in the first place. That is assumed to be the same in both cases with/without type information.)
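To make the steps above concrete, here's a sketch of the scenario in plain JS (illustrative only — the specialization happens inside the engine and isn't observable from script; function and property names are made up):

```javascript
// A hot function: engines specialize compiled code on the hidden
// "shape" (property layout) of the objects it receives.
function magnitude(p) {
  const z = p.z ?? 0;
  return Math.sqrt(p.x * p.x + p.y * p.y + z * z);
}

// Steps 1-2: many {x, y} objects — the JIT compiles a fast path
// specialized for that shape.
for (let i = 0; i < 10000; i++) magnitude({ x: i, y: i });

// Step 3: an {x, y, z} object arrives — it misses the specialized
// path and takes a deoptimized one...
// Steps 4-5: ...until the JIT has seen enough of them to compile
// (and eventually merge) a hot path covering {x, y, z?}.
for (let i = 0; i < 10000; i++) magnitude({ x: i, y: i, z: i });
```

With ahead-of-time type information declaring the `{x, y, z?}` shape, the claim is the engine could start at the merged hot path instead of learning it by observation.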
Runtime type checks usually decrease the performance.
Right, runtime type checks do - and JavaScript has to do them pervasively. Hence OP's question: could these type annotations eliminate the need for runtime type checks by making type assertions static? (The answer is no, but that was the question.)
AFAIK that's not a goal for this proposal.
JSDoc comments are definitely not the “best of both worlds,” they are a limited and verbose subset of typing. I would only use them for specific reasons, like I’m in a hostile ecosystem that dogmatically doesn’t want any build steps (like WebGL/WebGPU engines).
It’s also really nice to have annotations describing what a function or object does; consuming a library is considerably worse without them, as they often remove the need to look up documentation, and they can even link to the right documentation page (although that is very rare).
When I write them I don’t annotate types, though - that’s already done by TypeScript, and editors like VS Code even give you type hints in a .js file if they detect .d.ts files in the source.
I've found a happy medium to be annotating functions etc with JSDoc style comments but keeping interfaces etc. in a .d.ts file. The Typescript compiler is able to import it from a .js file just fine.
The one frustrating thing is that you can’t just use .d.ts files in a project to define the full types for their corresponding .js module, without importing each type def individually. And assigning imported type defs to classes is severely limited and confusing. It’s a shame, because the same structure/approach ~just works for packages installed under node_modules.
Agree, any complex types just get placed in their own `.d.ts` file, and import that into JSDoc comments
```javascript
// Top of file, usually
/** @typedef {import('./typedefs/MyThing').MyThing} MyThing */

// Later
/** @param {MyThing} x */
function doThing(x) {
  // ...
}
```

Worth noting that you can also do this from plain .js files. The thing being imported can be a JS thing like an exported class, or a JSDoc type that's defined with @typedef.

```javascript
/** @param {import('./foo.js').SomeType} arg */
function doThing(arg) {
  // ...
}
```
This is the way.
Some of the limitations listed are wrong:
> The main limitation is that you don't have access to some TypeScript-specific syntax.
> * as, also known as type assertions (or casts)
You generally shouldn’t (as the article notes), but you can:

```javascript
const foo = /** @type {Bar} */ ({
  something: {
    satisfying: 'Bar',
  },
});
```

Note: the parentheses are necessary; they are the JS/JSDoc mechanism for type casts.

This also works for `as const`, which is often better when you can use readonly types:

```javascript
const foo = /** @type {const} */ ({
  something: 'else',
});
// { readonly something: 'else' }
```
Better still, the satisfies operator also works (though its JSDoc support lagged a bit):

```javascript
/** @satisfies {Bar} */
const foo = …;
```

This will infer the type of `foo` and check it for assignability to `Bar`.

> * is, also known as type predicates
This definitely works:
```javascript
/**
 * @param {unknown} value
 * @return {value is Foo}
 */
const isFoo = (value) => …
```
You can also define the guard as a standalone/assignable type:

```javascript
/**
 * @callback IsFoo
 * @param {unknown} value
 * @return {value is Foo}
 */
```
The @callback tag is ~equivalent to a type alias for a function signature.

Also useful, if needed, is the @template tag, which is roughly equivalent to TS type parameters (generics) and which you can use, for example, to assign generic type guards:
```javascript
/**
 * @template T
 * @callback IsT
 * @param {unknown} value
 * @return {value is T}
 */

/** @type {IsT<Foo>} */
const isFoo = (value) => …
```
[Disclaimer: typed this comment on my phone from memory, apologies for any typos or mistakes!]

I'm using this setup at the moment, mostly because I work on a legacy code base that has no good types yet.
Typically you would use jsconfig.json though, which is the same as tsconfig.json but with allowJs already set. Mostly a stylistic change, but some tooling might benefit from having jsconfig instead of tsconfig.
Yeah, all three settings this article mentions at the top are correctly defaulted when using a jsconfig.json file instead of a tsconfig.json file: allowJs, checkJs, and noEmit.
Additionally, I don't know about other editors but I know that VS Code subtly adjusts some workspace detail defaults based on the presence of jsconfig.json over tsconfig.json.
It's slightly more than just a stylistic change, but yeah there's no "wrong answer" if you prefer tsconfig.json to jsconfig.json, especially if you think there may be a subset of files you wind up preferring TS syntax and want to transpile/type-strip in the future (dropping the "noEmit"). (If for some reason you need to do a bunch of generics, for instance, that's a lot easier with type-stripping from TS syntax than trying to squeeze in JSDoc. Also, maybe you'll bump into a situation where TS downleveling is helpful, for a while I had projects that needed downlevelIteration to support certain IE/Safari versions and letting TS downlevel that was a lot easier and cleaner than the Babel-based alternatives.)
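For reference, a minimal jsconfig.json for this setup might look like the following (a sketch — `strict` and the `include` path are my additions, not from the article; the three flags the article mentions are spelled out even though jsconfig.json defaults them):

```jsonc
{
  "compilerOptions": {
    "allowJs": true,   // defaulted by jsconfig.json
    "checkJs": true,   // defaulted by jsconfig.json
    "noEmit": true,    // defaulted by jsconfig.json
    "strict": true     // optional, but catches a lot more
  },
  "include": ["src"]
}
```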
// @ts-check works until you need to do an assertion, a cast, a predicate, pass an explicit type to a generic method, etc., then it gets very annoying. I only ever use it if I’m writing a single self-contained and relatively short script and can’t be bothered with a build step.
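As a sketch of one such annoyance (`first` is a made-up function): there's no `fn<T>(...)` call syntax in checked JS, so passing an explicit type argument to a generic means annotating around the call instead:

```javascript
// @ts-check

/**
 * @template T
 * @param {T[]} xs
 * @returns {T | undefined}
 */
function first(xs) {
  return xs[0];
}

// In TS you could write `first<number>([])` to pin T explicitly.
// In checked JS you work around it by annotating the result instead:
/** @type {number | undefined} */
const n = first([]);
```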
Does anyone know if I have a TypeScript project, is there an automated process to turn it into this format this article describes? Eject for TS, I guess.
You can probably do this with the typescript compiler API https://github.com/microsoft/TypeScript/wiki/Using-the-Compi...
Kinda seems like the dreamiest build target.
TypeScript is a superset of JS/JSDoc, so you’ll probably get a long way, but at some point the types will be dumbed down.
GPT-4 would probably do a good job at that if the code is not sensitive
Is there some way to load an entire large project into gpt4? Or do you mean to have it translate one file at a time without context?
This is exactly how I use TypeScript: as a linter and static type analyzer for JavaScript. Unfortunately, TypeScript's support for JSDoc is quite basic, so the method has its limits. But I'm not committing any TypeScript code.
Someone here just suggested importing type definition files via JSDoc annotations, and that's actually a good idea.
Right on time as I've been trying to add some type safety onto my OSS project. Does anyone have some extra info on how to best leverage JSDoc to write safe(r) JS code?
I thought this was going to be a project like ts-node [1]
Confused about the mentions of Elm. What does this have to do with Elm?
It's an Elm blog. It starts off with saying that even when writing Elm, you still have to drop in to Javascript.
Okay... that's not saying much though? Why even mention it? I don't know anything about Elm but this looks like a blog post about JS, not Elm, but it's talking about Elm right from the start as if that's an important distinction to make here.
It's like those recipe blogs that have to give you their life story before telling you how to make a meatball. Just get to the damn point and skip the bullshit, please.
The website is called "incrementalelm.com".
I'm more confused because I thought the first rule of Elm was that you'll never meet anyone that uses it