If you're not following recent LLM/AI developments, Agent Skills are the new hotness.
The idea is:
- The LLM receives a list of all available skill names and their descriptions at the start of every chat/session, so it knows what's available.
- Inside the SKILL.md you leverage markdown structure (headings) and links to other markdown files (in the skill's ./references/ directory).
- The agent/LLM progressively requests/reads more detailed information, so as not to pollute the context window out of the gate.
That's basically it – and it has a website and specification. It's simple and elegant, and should be easy for everyone to use in their coding harnesses and development tools.
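Concretely, per the spec, a skill is just a directory whose SKILL.md starts with YAML frontmatter carrying the name and description the agent sees up front; everything else is read on demand. A hypothetical skeleton (the file names and wording here are mine, not from any real skill):

```markdown
---
name: zig
description: Zig 0.15.x standard library documentation and idiomatic patterns. Use when writing or reviewing Zig code.
---

# Zig

## Allocators
For allocator selection and ownership rules, read references/std-allocators.md.

## Containers
For ArrayList and HashMap usage, read references/std-arraylist.md.
```

The agent only opens the reference files when the headings suggest they're relevant, which is the whole progressive-disclosure trick.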
There are already tens of thousands of these skills available (36,699 as of this post), some of which are carefully crafted and useful, and some of which I suspect would lower the quality of any codebase you apply them to.
Seeing the variety of skills, and assessing some of them, was a reminder of how much experience, knowledge, and preference vary. Everyone has a different opinion on the right way to make software, and some of those opinions I am positively certain are wrong.
Skills are just an extension of this existing divide, so be mindful in what you pull in, and whose thoughts you let steer your work.
LLMs with Zig vs. Others
I've found that there's a pretty large difference between what you can do with minimal effort in TypeScript vs. Zig vs. Swift with LLMs.
I can "one-shot" pretty elaborate things in TypeScript or Swift, assuming I specify things well and take my time writing the prompt.
I did this prototype app in Swift recently:


The code isn't perfect, but it's pretty damn decent, probably thanks to a well-made SwiftUI and Swift Concurrency skill.
It took a little back and forth, but I did this in a single evening after working all day. The package source is only ~290 lines of Swift, and doesn't look much different from anything I'd write, just has some (in my opinion) excessive comments.
Here's the prompt I used:
I want you to create a Swift Package that uses https://swiftpackageindex.com/ml-explore/mlx-swift-lm/main/documentation/mlxembedders to load and provide an interface for the voyager-nano embedding model. I plan to use it in the future to create embeddings for vector search.
I converted the model to mlx and it's all in @python-mlx-convert-model/ . Please use the Demo project to use this new Swift Package, and demo all of its features including embedding generation / configuration, and if possible, cosine similarity search of two TextEditor panes
As you can see, not much was required to get it to do what I wanted.
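For context, the cosine similarity the prompt asks for is just the normalized dot product of two embedding vectors — 1.0 means same direction, 0.0 means orthogonal. A quick sketch (in Python for brevity; this is not code from the package):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

In the demo app, each TextEditor pane's contents get embedded and this score is what tells you how semantically close the two texts are.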
But using Zig with LLMs proves to be much more difficult. The language is still changing (though not as much as it used to), and there is of course less Zig code out there. However, the big thing hurting Zig currently, I suspect, is that no one is doing Reinforcement Learning from Verifiable Rewards (RLVR) on the language. I'm guessing the labs do this for TypeScript, Python, and probably even Swift (maybe Anthropic trying to land that sweet Siri/Apple contract) to hone models on valid syntax. I'm purely speculating, but it would explain the divide between languages. Swift used to be much worse to use with LLMs until recently.
This all leads to constant friction using Zig with LLMs – they constantly use deprecated or removed language features because the weights for the right syntax haven't been turned up. So we'll have to take matters into our own hands, and try to offer up some good code into the context window, so our prompt leads the model toward the right thing.
Building a Skill, the right? way
I had the misfortune of submitting someone else's AI-generated Zig book to Hacker News, which the author claimed was completely written by hand. The submission got very popular and was at the top of the site for most of the day before it was ultimately flagged and hidden. Before submitting it I tried to do my due diligence, looking at chapters of the book I felt competent to assess, and felt it looked correct and well made. Unfortunately, as the day went on and I and others dug deeper, it became clear it was almost entirely AI-generated. At first I thought the amount of effort must have been substantial, but as I've learned more about agents I see how one can actually do these things quickly (though sloppily).
I want to do something better here: be honest and make it clear that this is all AI/LLM generated. The amount of Zig I write has dropped substantially since changing jobs ~2 years ago, so I don't think I'm highly qualified to judge the output quality. But based on what I remember/know, it's surprisingly good. No one is writing docs on the standard library because it's expected to change, so I hope this helps. I think this is a good use of AI/LLMs: doing tedious work no one wants to do because it will be outdated in ~3 months, but is still useful today.
Methodology
My thinking on building the skill is that we should try and keep the model grounded by always using the Zig source code as context.
I used Opus 4.5 exclusively, and basically spoon-fed it short prompts asking it to document each module of the Zig Standard Library. I also generated a version of the Language Reference in markdown to be able to feed it in where appropriate. I used the skill-creator from Anthropic to help structure the content for agents/LLMs.
It took probably ~8 hours (lots of which were waiting for limits to reset). I want to release it because I think the methodology is sound, and one could do much worse trying to do this quickly. I'm sure it's not perfect, but hopefully it's decent and provides value to you (I show some example usage below).
You can find the skill on GitHub.
I did many iterations of prompts like:
Using the skill-creator skill, please help me improve my zig skill by adding documentation using skill best practices for std.compress . You can view the actual code at @std/compress.zig and @std/compress/
Sometimes I would notice something not being clear, and ask it to take in other resources like the Release Notes when I knew they addressed the relevant module:
Using the skill-creator skill, please help me improve my zig skill by reviewing the writer and io sections. Some changes are covered by @ziglang.org_download_0.15.1_release-notes.html.2026-01-30T05_07_12.697Z.md . The @std/Io.zig and @std.Io/ files are the source of truth on their usage.
I also want to update the @SKILL.md section about Writergate, and @reference/patterns.md to make sure they are accurate and helpful. The unit tests of the Zig std are probably a great reference.
And some review passes like this (warning – when you leave it open-ended like this the LLM tends to do stupid stuff):
Using the skill-creator skill, please help me improve my zig skill by reviewing it, along with @langref-0.15.2.md and see if there any modifications we should make that would help it be more effective
And for the Zig patterns I used:
Using the skill-creator skill, please help me improve my zig skill by adding the patterns identified in @ziggit.dev_t_code-patterns_1748.2026-01-31T19_55_15.369Z.md . You can map the referenced links to the @0.15.2/ directory. So the https://codeberg.org/ziglang/zig/src/tag/0.15.2/lib/std/hash_map.zig#L135-L140 link points to @0.15.2/lib/std/hash_map.zig lines 135-140.
Links to https://ziglang.org/documentation/0.15.2/ are the document at @langref-0.15.2.md and the release notes for 0.14.0 are at @ziglang.org_download_0.14.0_release-notes.html.2026-01-30T05_04_22.199Z.md, and 0.15.1 is at @ziglang.org_download_0.15.1_release-notes.html.2026-01-30T05_07_12.697Z.md .
I want you to look at each referenced section, and use it to understand the pattern provided, and include it in the skill so it can be used. These patterns are key to idiomatic Zig code, so it's important we understand and explain them accurately, with valid Zig code.
Followup:
I think you kind of missed the plot on a few of these, and how they are generic patterns that a user could apply to their own code. I like the specific examples, that part is fine. But think about how they could be applied. In particular:
- The Context pattern as presented is limited and doesn't explain how I could use or apply it to different problems, and why
How it turned out
- The SKILL.md is 357 lines, well under the suggested 500 line limit.
- The ./references/ total to 21,495 lines, split across 51 files.
Here's a chunk of the SKILL.md: 
And references/std-allocators.md (std.heap): 
And references/std-arraylist.md: 
As you can see it's very thorough, and covers pretty much everything. LLMs aren't perfect at invoking skills yet, so I think it takes a little practice/knowledge to nudge them toward using them. But I expect that will change as the labs do RLHF for skill use, like they did for tool use.
Usage
Download the skill and drop the zig directory into ~/.claude/skills per the Claude Skill Docs.
See also: agent skills specification
An example prompt you might use to invoke the skill (unfortunately it seems you have to explicitly call out the skill; they're relatively new and otherwise often go unused/unread):
Using the zig skill please review @src/ and help me improve the codebase. I think there are a lot of patterns I could be using, and my use of allocators isn't great.
I ran this on a very poor quality codebase I vibe coded (the language reference parser referenced above), once with the skill, and once without. Using Opus 4.5 in thinking mode via Claude Code CLI.
With the skill it:
- fixed a ton of old ArrayList initialization code to use .empty
- switched to using an arena allocator for a bunch of temporary allocations
Stuff it only knew because it read the skill patterns.md.
I did the same on a codebase I hand wrote several years ago, once with the skill, once without, both times using plan mode. With the skill, the code had the following improvements:
- It added proper error handling to a place where I lazily used .?. I had actually been bitten by a schema change here not long ago, and it was hard to track down, so this was an obvious improvement and something I should have thought of but didn't.
Original:
pub fn fromHashmap(hash_map: std.StringHashMap([]const u8)) !Self {
return .{
.name = hash_map.get("name").?,
.author = hash_map.get("author") orelse "Unknown",
.variant = hash_map.get("variant").?,
.base00 = hash_map.get("base00").?,
.base01 = hash_map.get("base01").?,
.base02 = hash_map.get("base02").?,
.base03 = hash_map.get("base03").?,
.base04 = hash_map.get("base04").?,
.base05 = hash_map.get("base05").?,
.base06 = hash_map.get("base06").?,
.base07 = hash_map.get("base07").?,
.base08 = hash_map.get("base08").?,
.base09 = hash_map.get("base09").?,
.base0A = hash_map.get("base0A").?,
.base0B = hash_map.get("base0B").?,
.base0C = hash_map.get("base0C").?,
.base0D = hash_map.get("base0D").?,
.base0E = hash_map.get("base0E").?,
.base0F = hash_map.get("base0F").?,
};
}

Without Skill:
pub fn fromHashmap(hash_map: std.StringHashMap([]const u8)) ParseError!Self {
return .{
.name = hash_map.get("name") orelse return ParseError.MissingRequiredField,
.author = hash_map.get("author") orelse "Unknown",
.variant = hash_map.get("variant") orelse return ParseError.MissingRequiredField,
.base00 = hash_map.get("base00") orelse return ParseError.MissingRequiredField,
.base01 = hash_map.get("base01") orelse return ParseError.MissingRequiredField,
.base02 = hash_map.get("base02") orelse return ParseError.MissingRequiredField,
.base03 = hash_map.get("base03") orelse return ParseError.MissingRequiredField,
.base04 = hash_map.get("base04") orelse return ParseError.MissingRequiredField,
.base05 = hash_map.get("base05") orelse return ParseError.MissingRequiredField,
.base06 = hash_map.get("base06") orelse return ParseError.MissingRequiredField,
.base07 = hash_map.get("base07") orelse return ParseError.MissingRequiredField,
.base08 = hash_map.get("base08") orelse return ParseError.MissingRequiredField,
.base09 = hash_map.get("base09") orelse return ParseError.MissingRequiredField,
.base0A = hash_map.get("base0A") orelse return ParseError.MissingRequiredField,
.base0B = hash_map.get("base0B") orelse return ParseError.MissingRequiredField,
.base0C = hash_map.get("base0C") orelse return ParseError.MissingRequiredField,
.base0D = hash_map.get("base0D") orelse return ParseError.MissingRequiredField,
.base0E = hash_map.get("base0E") orelse return ParseError.MissingRequiredField,
.base0F = hash_map.get("base0F") orelse return ParseError.MissingRequiredField,
};
}

With Skill:
pub fn fromHashmap(hash_map: std.StringHashMap([]const u8)) !Self {
return .{
.name = hash_map.get("name") orelse return error.MissingName,
.author = hash_map.get("author") orelse "Unknown",
.variant = hash_map.get("variant") orelse return error.MissingVariant,
.base00 = hash_map.get("base00") orelse return error.MissingBase00,
.base01 = hash_map.get("base01") orelse return error.MissingBase01,
.base02 = hash_map.get("base02") orelse return error.MissingBase02,
.base03 = hash_map.get("base03") orelse return error.MissingBase03,
.base04 = hash_map.get("base04") orelse return error.MissingBase04,
.base05 = hash_map.get("base05") orelse return error.MissingBase05,
.base06 = hash_map.get("base06") orelse return error.MissingBase06,
.base07 = hash_map.get("base07") orelse return error.MissingBase07,
.base08 = hash_map.get("base08") orelse return error.MissingBase08,
.base09 = hash_map.get("base09") orelse return error.MissingBase09,
.base0A = hash_map.get("base0A") orelse return error.MissingBase0A,
.base0B = hash_map.get("base0B") orelse return error.MissingBase0B,
.base0C = hash_map.get("base0C") orelse return error.MissingBase0C,
.base0D = hash_map.get("base0D") orelse return error.MissingBase0D,
.base0E = hash_map.get("base0E") orelse return error.MissingBase0E,
.base0F = hash_map.get("base0F") orelse return error.MissingBase0F,
};
}

- I have a rough set of functions that output the theme colors based on what language we're outputting to (either Zig or Swift).
Original
// For Swift output
fn generateVariableString(allocator: std.mem.Allocator, name: []const u8, hex_string: []const u8, trailing_comma: bool) ![]const u8 {
// Calculate the value for normalized float r/g/b and store that string too
const r = try std.fmt.parseUnsigned(u8, hex_string[0..2], 16);
const g = try std.fmt.parseUnsigned(u8, hex_string[2..4], 16);
const b = try std.fmt.parseUnsigned(u8, hex_string[4..6], 16);
// Turn the u8 color values into normalized f32s
const ftheme_color: [3]f32 = .{
@as(f32, @floatFromInt(r)) / 255.0,
@as(f32, @floatFromInt(g)) / 255.0,
@as(f32, @floatFromInt(b)) / 255.0,
};
const float_string = try std.fmt.allocPrint(allocator, "Color.init(red: {d}, green: {d}, blue: {d})", .{ ftheme_color[0], ftheme_color[1], ftheme_color[2] });
if (trailing_comma) {
return std.mem.concat(allocator, u8, &.{ " ", name, " : ", float_string, ",", " //#", hex_string, "\n" }) catch unreachable;
} else {
return std.mem.concat(allocator, u8, &.{ " ", name, " : ", float_string, " //#", hex_string, "\n" }) catch unreachable;
}
}
// For Zig output
fn generateFloatVariableString(allocator: std.mem.Allocator, name: []const u8, hex_string: []const u8, trailing_comma: bool) ![]const u8 {
// Calculate the value for normalized float r/g/b and store that string too
const r = try std.fmt.parseUnsigned(u8, hex_string[0..2], 16);
const g = try std.fmt.parseUnsigned(u8, hex_string[2..4], 16);
const b = try std.fmt.parseUnsigned(u8, hex_string[4..6], 16);
// Turn the u8 color values into normalized f32s
const ftheme_color: [3]f32 = .{
@as(f32, @floatFromInt(r)) / 255.0,
@as(f32, @floatFromInt(g)) / 255.0,
@as(f32, @floatFromInt(b)) / 255.0,
};
const float_string = try std.fmt.allocPrint(allocator, ".{{.r = {d}, .g = {d}, .b = {d}}}", .{ ftheme_color[0], ftheme_color[1], ftheme_color[2] });
if (trailing_comma) {
return std.mem.concat(allocator, u8, &.{ " .", name, " = ", float_string, ",", " //#", hex_string, "\n" }) catch unreachable;
} else {
return std.mem.concat(allocator, u8, &.{ " .", name, " = ", float_string, " //#", hex_string, "\n" }) catch unreachable;
}
}
Without Skill (no real changes, switched to using allocPrint):
fn generateVariableString(allocator: std.mem.Allocator, name: []const u8, hex_string: []const u8, trailing_comma: bool) ![]const u8 {
// Calculate the value for normalized float r/g/b and store that string too
const r = try std.fmt.parseUnsigned(u8, hex_string[0..2], 16);
const g = try std.fmt.parseUnsigned(u8, hex_string[2..4], 16);
const b = try std.fmt.parseUnsigned(u8, hex_string[4..6], 16);
// Turn the u8 color values into normalized f32s
const ftheme_color: [3]f32 = .{
@as(f32, @floatFromInt(r)) / 255.0,
@as(f32, @floatFromInt(g)) / 255.0,
@as(f32, @floatFromInt(b)) / 255.0,
};
const float_string = try std.fmt.allocPrint(allocator, "Color.init(red: {d}, green: {d}, blue: {d})", .{ ftheme_color[0], ftheme_color[1], ftheme_color[2] });
if (trailing_comma) {
return std.fmt.allocPrint(allocator, " {s} : Color.init(red: {d}, green: {d}, blue: {d}), //#{s}\n", .{ name, ftheme_color[0], ftheme_color[1], ftheme_color[2], hex_string });
} else {
return std.fmt.allocPrint(allocator, " {s} : Color.init(red: {d}, green: {d}, blue: {d}) //#{s}\n", .{ name, ftheme_color[0], ftheme_color[1], ftheme_color[2], hex_string });
}
}
// For Zig output
fn generateFloatVariableString(allocator: std.mem.Allocator, name: []const u8, hex_string: []const u8, trailing_comma: bool) ![]const u8 {
// Calculate the value for normalized float r/g/b and store that string too
const r = try std.fmt.parseUnsigned(u8, hex_string[0..2], 16);
const g = try std.fmt.parseUnsigned(u8, hex_string[2..4], 16);
const b = try std.fmt.parseUnsigned(u8, hex_string[4..6], 16);
// Turn the u8 color values into normalized f32s
const ftheme_color: [3]f32 = .{
@as(f32, @floatFromInt(r)) / 255.0,
@as(f32, @floatFromInt(g)) / 255.0,
@as(f32, @floatFromInt(b)) / 255.0,
};
const float_string = try std.fmt.allocPrint(allocator, ".{{.r = {d}, .g = {d}, .b = {d}}}", .{ ftheme_color[0], ftheme_color[1], ftheme_color[2] });
if (trailing_comma) {
return std.fmt.allocPrint(allocator, " .{s} = .{{.r = {d}, .g = {d}, .b = {d}}}, //#{s}\n", .{ name, ftheme_color[0], ftheme_color[1], ftheme_color[2], hex_string });
} else {
return std.fmt.allocPrint(allocator, " .{s} = .{{.r = {d}, .g = {d}, .b = {d}}} //#{s}\n", .{ name, ftheme_color[0], ftheme_color[1], ftheme_color[2], hex_string });
}
}

With Skill:
const OutputFormat = enum { swift, zig };
fn parseHexToRgbFloats(hex_string: []const u8) ![3]f32 {
const r = try std.fmt.parseUnsigned(u8, hex_string[0..2], 16);
const g = try std.fmt.parseUnsigned(u8, hex_string[2..4], 16);
const b = try std.fmt.parseUnsigned(u8, hex_string[4..6], 16);
return .{
@as(f32, @floatFromInt(r)) / 255.0,
@as(f32, @floatFromInt(g)) / 255.0,
@as(f32, @floatFromInt(b)) / 255.0,
};
}
fn generateColorVariableString(allocator: std.mem.Allocator, name: []const u8, hex_string: []const u8, trailing_comma: bool, format: OutputFormat) ![]const u8 {
const rgb = try parseHexToRgbFloats(hex_string);
const comma = if (trailing_comma) "," else "";
return switch (format) {
.swift => try std.fmt.allocPrint(
allocator,
" {s} : Color.init(red: {d}, green: {d}, blue: {d}){s} //#{s}\n",
.{ name, rgb[0], rgb[1], rgb[2], comma, hex_string },
),
.zig => try std.fmt.allocPrint(
allocator,
" .{s} = .{{.r = {d}, .g = {d}, .b = {d}}}{s} //#{s}\n",
.{ name, rgb[0], rgb[1], rgb[2], comma, hex_string },
),
};
}

As you can see, it's much more concise and easier to understand.
I wondered what would happen if I just re-ran the prompt again, not calling out allocators:
Using the zig skill please review @src/ and help me improve the codebase
This netted a ton of improvements. The nicest one is below:
Original (Lord forgive me)
pub fn slugify(allocator: std.mem.Allocator, name: []const u8) ![]const u8 {
var map = std.StringHashMap([]const u8).init(allocator);
try map.put("-", "");
try map.put(" ", "");
try map.put("(", "");
try map.put(")", "");
try map.put(".", "");
try map.put(",", "");
try map.put("+", "Plus");
const slugified = replaceAllOccurrences(allocator, name, map);
return slugified;
}
fn replaceAllOccurrences(allocator: std.mem.Allocator, input: []const u8, replacements: std.StringHashMap([]const u8)) ![]const u8 {
var output_buffer = try allocator.alloc(u8, input.len);
var insertion_index: usize = 0;
for (input) |c| {
if (replacements.get(&[_]u8{c})) |replacement_string| {
if (std.mem.eql(u8, replacement_string, "")) {
// NOTE: We don't increment the insertion_index, effectively ignoring/removing the char, since we only return the slice up to insertion_index
} else {
const length_delta = replacement_string.len - 1;
if (length_delta > 0) {
output_buffer = try allocator.realloc(output_buffer, output_buffer.len + length_delta);
}
@memcpy(output_buffer[insertion_index..], replacement_string);
insertion_index += length_delta;
}
} else {
output_buffer[insertion_index] = c;
insertion_index += 1;
}
}
// NOTE: Index up to the insertion index, ignoring any extra chars we didn't copy over to output_buffer
return output_buffer[0..insertion_index];
}

After:
pub fn slugify(allocator: std.mem.Allocator, name: []const u8) ![]const u8 {
var list: std.ArrayList(u8) = .empty;
for (name) |c| {
switch (c) {
'-', ' ', '(', ')', '.', ',' => {}, // skip these characters
'+' => try list.appendSlice(allocator, "Plus"),
else => try list.append(allocator, c),
}
}
return list.toOwnedSlice(allocator);
}
test "slugify removes special characters" {
const result = try slugify(std.testing.allocator, "Hello World (test)");
defer std.testing.allocator.free(result);
try std.testing.expectEqualStrings("HelloWorldtest", result);
}
test "slugify converts plus to Plus" {
const result = try slugify(std.testing.allocator, "C++");
defer std.testing.allocator.free(result);
try std.testing.expectEqualStrings("CPlusPlus", result);
}

Quite an improvement. This is much clearer than my original code, and it dumped out some decent tests too. Also note the correct std.ArrayList usage.
My original code didn't follow the proper Zig convention of having Alloc in the name to signify that the caller owns the memory. I checked and there's nothing in the skill to tell it that (due to it not being mentioned in the Language Reference), so that's another thing I want to add. Maybe a style guide and conventions section that covers this sort of thing.
A better version would be:
/// Converts name to slug. Caller owns returned memory.
pub fn slugifyAlloc(allocator: std.mem.Allocator, name: []const u8) ![]const u8 {
var list: std.ArrayList(u8) = .empty;
errdefer list.deinit(allocator);
for (name) |c| {
switch (c) {
'-', ' ', '(', ')', '.', ',' => {},
'+' => try list.appendSlice(allocator, "Plus"),
else => try list.append(allocator, c),
}
}
return list.toOwnedSlice(allocator);
}

Getting it to do this should be pretty easy with an additional reference (style-and-conventions.md?).
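A hypothetical seed for such a conventions file (the readToEndAlloc example is mine — double-check any specifics like this against the std source before trusting them):

```markdown
# Style and Conventions

- Functions that return allocated memory the caller must free end in
  `Alloc` (e.g. std.fs.File.readToEndAlloc), and/or state ownership in
  a doc comment: /// Caller owns returned memory.
- Prefer returning an error (orelse return error.Missing...) over
  unwrapping with .? when a missing value is recoverable.
```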
Shortcomings
Based purely on how the prompting went, I'd guess the build system section is rough. I tried giving it a few resources that in hindsight didn't include the code examples as I intended. The patterns reference also had some churn so there might be some imprecision/inconsistencies.
Someone I trust was nice enough to take a look at an area they have expertise in (the atomics reference), and while the code examples were correct, they found some slight errors in the explanations. They thought it was only slightly worse than what you would find in your typical blog post. This is also probably the unavoidable imprecision of language and the nature of these things.
On the bright side, we have the entire standard library and the language reference, all at the LLM's fingertips. And it's all up to date.
Future Work
If the quality seems reasonable after more eyes are on it, I'll try and do a followup 0.16.0 release when it comes out, and probably improve the prompts after I've used and thought about this for a bit longer.
Pipeline
I used Claude Code (and Opus 4.5) exclusively since I have a subscription, but it would be better to turn this into a real pipeline, breaking down the Zig source and documenting it in a more systematic way.
I've built a few components of this out in an earlier project, where I index the entire source code up front and have a way to search it, but I am apprehensive of trying the full process this way due to token costs via the API.
Ziglings
I wanted to try giving it Ziglings and have it pull out "lessons" from each diff, to try and incorporate that knowledge. Also, the per-version ziglang changes list is nice and would be easy to incorporate.
Tools and other stuff
It would be neat to expose a tool, maybe leveraging autodoc (which powers https://ziglang.org/documentation/0.15.2/std/), to give the agent a "std documentation lookup" capability. I don't think tool use within skills is great today, but it would probably be wise for the future as it improves. A tool to just grab source files from the standard library would probably also be useful.
I'd also love to expose the Zig parser so the LLM could quickly check if syntax is valid. Maybe this could simply be done through improved instructions in the skill, guiding the LLM to regularly check syntax since the compiler is so fast.
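A low-tech version of that guidance — assuming zig ast-check works the way I remember, parsing and AST-checking a single file without building the project — could be a short instruction in the SKILL.md itself:

```markdown
## Verify syntax after every edit

After writing or editing a .zig file, run:

    zig ast-check path/to/file.zig

It checks the file without building anything, so it is nearly instant;
fix any reported errors before continuing.
```

That would catch the deprecated-syntax habit cheaply, at each step, instead of waiting for a full build to fail.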
Release Notes and Language Reference
I could have done a better job cleaning up the Language Reference. I wrote a very sloppy program that takes the custom HTML format Andrew Kelley / the Zig Core Team use for writing it and converts it to markdown, but it's not quite perfect (weird spacing; nothing major, but probably still hurts a little).
I also left a lot of content in the Release Notes that didn't need to be there (List of Supporters, etc.), and could have had an effect on output.
If I do the process again I think I'll clean those up first.
Feedback / Contributing
Please report any issues; no need to write a giant report with a fix. I just want to know all the shortcomings and hallucinations. I'm hoping they're pretty minimal, and the feedback will help me improve the process if I do another pass or release.
I'd also happily take feedback on the prompts. I didn't spend much time there, and I'm sure putting some more thought into them would change/improve the output substantially. The standard library has really good test coverage, so I wish I had explicitly asked it to find invariants and usage examples that way.
The Zig Community and AI
The Zig community really dislikes AI. I think this tracks with other communities around systems languages, game development, and really anything "low-level". These communities have always been misunderstood and under attack from groups working at higher-levels of abstraction, who don't see the value in going deeper.
I worry Zig and the other new languages (Odin, Jai, C3) will see less adoption as the productivity gap with LLMs widens. I think these communities happily snub AI usage, as they see it as yet another attack on the craft of producing working, quality software. While I think that viewpoint is correct today, I don't think it will be for long.
As I use agent loops more and more, it's clear to me that they will become another tool in the toolbelt at minimum, offering various advantages to those who can tame them. And I should mention too, that they definitely do rot the brain a bit.
We've mechanized language like we mechanized arithmetic when computers were first invented, and now processing text at a stupid rate for stupid cheap is the new norm. There will be ways to find value in that for our profession; it's undeniable. At minimum it's going to be relentlessly chewing through logs and doing code review.
The careful act of thinking through systems and writing code is what Ziguanas take pride in, though, and I don't think that's going anywhere.
Conclusion
Overall I'm pretty happy with how this works. These are the kind of improvements I wouldn't think of unless I happened to have recently read a blog post about the specific pattern. So something to scan over the complete codebase and call these out works well for me. I review and throw away anything I don't like.
There's still the non-deterministic slot machine aspect, but I think reading through all the diffs, and being able to invoke the agent/skill multiple times, helps. I can point out things I know are issues, and it reads in all the relevant references to actually do the right thing, saving me from digging through the standard library at each step.
I'll probably get called out for doing something dumb in the examples I shared, or missing some sins the AI made, but it's hard to look at the output and not agree it's better than the code I originally wrote.
Thanks
Thank you to everyone who has worked on Zig. Thank you tensorush for compiling Zig Code Patterns. I didn't know about the patterns list until I stumbled across it while making this, but I've always wanted a resource like that, and had actually worked on a pipeline to iterate over the Zig source and pull patterns like that out.
Also thanks to Protty for taking a look at a section, providing corrections and helping me gauge/understand the quality.
PS: If you're working on developer tools and need someone who thinks about this stuff too much, please let me know.
