It's finally here. AdonisJS v7 is out.
This is an incremental upgrade to v6 with minimal breaking changes but meaningful improvements across the framework. We updated 45+ official packages, introduced three new ones (@adonisjs/otel, @adonisjs/content, and edge-markdown), and strengthened the core foundation for the next phase of AdonisJS.
The highlight of this release is the work done towards end-to-end type safety. We'll walk through it in detail in this article.
If you'd like to explore things before continuing, here are some important links:
- Upgrade guide to migrate a v6 application to v7
- v6 docs if you need time before you can migrate
- New website with a fresh look
- New docs with a dedicated Start section and an official tutorial covering every starter kit
- Starter kits with better defaults and functional login and signup flows
A modern foundation
v7 requires Node.js 24 or above. Node.js 24 became the LTS version in October 2025, and all framework packages have been updated to take advantage of the features available in it.
Moving along with the platform allows us to eliminate unwanted dependencies and utilize newer APIs. For example:
- We replaced dotenv with Node's native util.parseEnv.
- UUID generation now uses crypto.randomUUID.
- File system operations use the native glob and fs utilities.
- Our TypeScript JIT compiler loader hook relies on certain properties that are only available in Node 24 and above.
Before creating a new v7 app, ensure you are running Node.js 24 or above.
Beyond Node.js, v7 upgrades the entire tooling stack. The framework now works with Vite 7 for frontend builds, the latest ESLint for linting, and the latest TypeScript for type checking.
New JIT TypeScript compiler
We have replaced ts-node with a new in-house package called ts-exec, a lightweight JIT compiler for Node.js applications built on top of SWC (a fast Rust-based TypeScript compiler).
At roughly 15 KB and 400 lines of code, ts-exec focuses on doing one thing well: executing TypeScript directly without writing compiled files to disk. It targets Node.js 24 and above, is used only during development, and can be adopted by any Node.js project, not just AdonisJS.
Starter kits
v7 ships with completely reworked starter kits. In v6, creating a new project involved a series of prompts asking whether you wanted Lucid, Auth, and other packages. The result was a bare scaffold that left you to wire up authentication, configure session management, and set up your frontend tooling from scratch.
v7 starter kits are opinionated and functional out of the box. Every kit includes Lucid and Auth. Every kit ships with a working login and signup flow. You start with a running application, not an empty shell.
npm init adonisjs@latest my-app
You pick from one of the following four starter kits.
Hypermedia
For server-rendered applications using Edge templates.
- Session-based authentication with CSRF protection
- Configured Vite pipeline for CSS and client-side JavaScript
- Edge templating with layouts and partials
API
For backend services that serve JSON.
- Token-based authentication with CORS configuration
- Structured controller and transformer setup
- Ready for your first endpoint out of the box
React
For single-page applications powered by React and Inertia.
- Session-based authentication with server-side rendering
- Type-safe <Link> and <Form> components
- Configured Vite pipeline with HMR
Vue
For single-page applications powered by Vue and Inertia.
- Session-based authentication with server-side rendering
- Type-safe <Link> and <Form> components
- Configured Vite pipeline with HMR
Barrel file generation
In v6, you imported controllers directly in your routes file. Every import had to be dynamic (a function that returns an import() call) for HMR to work. If you forgot to make an import dynamic, HMR silently stopped working for that controller.
const PostsController = () => import('#controllers/posts_controller')
const CommentsController = () => import('#controllers/comments_controller')
const UsersController = () => import('#controllers/users_controller')
router.get('/posts', [PostsController, 'index'])
router.get('/comments', [CommentsController, 'index'])
router.get('/users', [UsersController, 'index'])
As your application grows, the top of your routes file becomes a wall of lazy imports. Every new controller means another line of boilerplate.
v7 generates a barrel file at .adonisjs/server/controllers.ts that exports all your controllers. You import once from this barrel file and reference controllers by name. The generated barrel file handles lazy loading internally, so HMR works automatically.
import { controllers } from '#adonisjs/server/controllers'
router.get('/posts', [controllers.Posts, 'index'])
router.get('/comments', [controllers.Comments, 'index'])
router.get('/users', [controllers.Users, 'index'])
You never maintain the barrel file. It is kept up to date by the dev server.
The same approach applies to events and policies. Each gets its own generated barrel file.
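To make the pattern concrete, here is a self-contained sketch of the shape a generated barrel exposes. The inline classes stand in for real `#controllers/...` modules; in the generated file each entry would be a dynamic import() call. The point is that every entry stays a lazy-import function, so consumers get named access while HMR keeps working.

```typescript
// Each barrel entry is a function returning a module promise
type LazyController = () => Promise<{ default: new () => unknown }>

const controllers: Record<string, LazyController> = {
  // In the real generated file: () => import('#controllers/posts_controller')
  Posts: async () => ({ default: class PostsController {} }),
  Comments: async () => ({ default: class CommentsController {} }),
  Users: async () => ({ default: class UsersController {} }),
}

// A router can accept [controllers.Posts, 'index'] and resolve the module
// only when the route is matched for the first time.
const resolved = await controllers.Posts()
// resolved.default.name === 'PostsController'
```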
Auto-naming of routes
In v6, you named routes manually by chaining .as() on every route definition.
router.get('/posts', [PostsController, 'index']).as('posts.index')
router.get('/posts/:id', [PostsController, 'show']).as('posts.show')
router.post('/posts', [PostsController, 'store']).as('posts.store')
In v7, routes that reference controllers are automatically named using the controller.method pattern.
import { controllers } from '#adonisjs/server/controllers'
router.get('/posts', [controllers.Posts, 'index']) // → posts.index
router.get('/posts/:id', [controllers.Posts, 'show']) // → posts.show
router.post('/posts', [controllers.Posts, 'store']) // → posts.store
No .as() calls needed. The names are consistent, predictable, and always in sync with your controllers. This matters because route names are the foundation for what comes next.
End-to-end type safety
The headline feature of v7 is type safety that spans from your route definitions to your rendered frontend components. We use codegen to scan your application, extract types from your source code, and feed those types back into the framework. The result is that typos, missing parameters, and mismatched props become compile-time errors instead of runtime surprises.
This section walks through the entire type system, piece by piece.
Type-safe URL builder
In v6, the router.makeUrl method let you build URLs for named routes, but it was not type-safe. A typo in the route name or a missing parameter was only caught at runtime. The method still exists in v7 but is deprecated in favor of the new urlFor helper.
v7 scans your registered routes and auto-generates TypeScript types that the new urlFor function consumes.
declare module '@adonisjs/core/types/http' {
export interface RoutesList {
'posts.index': { params: {} }
'posts.show': { params: { id: string | number } }
'posts.store': { params: {} }
}
}
Now when you build URLs, you get autocomplete on route names and type-checked parameters.
import { urlFor } from '@adonisjs/core/services/url_builder'
export default class CommentsController {
async store({ request, response }: HttpContext) {
const comment = await Comment.create(request.body())
// ✅ Autocomplete on route names, type-checked params
const url = urlFor('posts.show', { id: comment.postId })
// ❌ TypeScript error: 'posts.shwo' does not exist
urlFor('posts.shwo', { id: comment.postId })
// ❌ TypeScript error: missing required param 'id'
urlFor('posts.show', {})
}
}
If you use Edge templates, the route helper has been replaced with a urlFor helper that has an identical API to the TypeScript version.
The URL builder also ships as a client module via the ~/client import. Your frontend code gets the same type-safe URL construction. We will come back to this when we cover the <Link> and <Form> components.
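To see how a RoutesList-style type map can drive a param-checked URL builder, here is a minimal, self-contained sketch. The pattern table and function body are our illustration of the typing technique, not the framework's implementation:

```typescript
// A generated-style route map: names to their required params
interface RoutesList {
  'posts.index': { params: {} }
  'posts.show': { params: { id: string | number } }
}

function urlFor<T extends keyof RoutesList>(
  name: T,
  params: RoutesList[T]['params']
): string {
  // Simplified lookup table from route name to URL pattern
  const patterns: Record<keyof RoutesList, string> = {
    'posts.index': '/posts',
    'posts.show': '/posts/:id',
  }
  // Replace each ':param' segment with its value
  return patterns[name].replace(/:(\w+)/g, (_, key) =>
    String((params as Record<string, string | number>)[key])
  )
}

const url = urlFor('posts.show', { id: 42 }) // → '/posts/42'
```

Because the params argument is keyed on the route name, a typo in the name or a missing param fails at compile time rather than at runtime.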
Transformers
Your routes are type-safe and your URLs are type-safe. The next question is: what about the data you send in responses?
In v6, when you returned a Lucid model from a controller, the framework serialized it implicitly. The serialization behavior was configured on the model itself through several serialization options, and the serialized output had no TypeScript type. You could rename a column, forget to exclude a sensitive field, or change a relationship, and nothing caught it at compile time. Worse, the serialization logic lived on the model, mixing persistence concerns with presentation concerns.
v7 introduces transformers as a dedicated serialization layer. A transformer extends BaseTransformer and implements a toObject method that defines the shape of the serialized output.
import { BaseTransformer } from '@adonisjs/core/transformers'
import type Post from '#models/post'
export default class PostTransformer extends BaseTransformer<Post> {
toObject() {
return {
id: this.resource.id,
title: this.resource.title,
excerpt: this.resource.body.substring(0, 200),
createdAt: this.resource.createdAt.toISO(),
}
}
}
BaseTransformer provides a static transform method that you call from your controllers. The serialize helper (available on HttpContext) wraps the result and returns a proper HTTP response.
import PostTransformer from '#transformers/post_transformer'
export default class PostsController {
async show({ params, serialize }: HttpContext) {
const post = await Post.findOrFail(params.id)
return serialize({
post: PostTransformer.transform(post),
})
}
}
Relationships
Transformers handle relationships by calling other transformers. This keeps the serialization of related data explicit and typed.
import { BaseTransformer } from '@adonisjs/core/transformers'
import type Post from '#models/post'
import UserTransformer from '#transformers/user_transformer'
export default class PostTransformer extends BaseTransformer<Post> {
toObject() {
return {
...this.pick(this.resource, [
'id',
'title',
'content',
'createdAt',
'updatedAt'
]),
author: UserTransformer.transform(this.resource.author)
}
}
}
By default, transformers serialize relationships up to one level deep. This prevents accidentally over-fetching nested data that your frontend does not need. For example, if a User has Posts and each Post has Comments, only the first level (User → Posts) is serialized by default. You can control this depth using the .depth() method.
toObject() {
return {
...this.pick(this.resource, ['id', 'fullName', 'email']),
posts: PostTransformer
.transform(this.resource.posts)
.depth(2) // Serializes user → posts → comments
}
}
Dependency injection
Transformer methods can inject dependencies using the @inject decorator. This is useful when a transformation needs access to the request context, for example to compute authorization flags based on the current user.
import type Post from '#models/post'
import { inject } from '@adonisjs/core'
import { HttpContext } from '@adonisjs/core/http'
import UserTransformer from '#transformers/user_transformer'
import { BaseTransformer } from '@adonisjs/core/transformers'
export default class PostTransformer extends BaseTransformer<Post> {
toObject() {
return {
...this.pick(this.resource, [
'id',
'title',
'createdAt',
'updatedAt'
]),
author: UserTransformer.transform(this.resource.author)
}
}
/**
* Detailed variant with authorization checks
*/
@inject()
async forDetailedView({ auth }: HttpContext) {
return {
...this.toObject(),
content: await markdownToHtml(this.resource.content),
can: {
view: true,
edit: auth.user?.id === this.resource.userId,
delete: auth.user?.id === this.resource.userId
}
}
}
}
Transformers → Data types
Transformers act as the single source of truth for the data leaving your server. Every response shape is defined inside a transformer's toObject method. The logical next step is to make those shapes available as TypeScript types on the client, so you never have to define the same data types manually in your frontend code.
The framework scans all your transformers at build time and generates a .d.ts file from the return values of their toObject methods. Inertia apps and separate frontend/backend projects inside a monorepo can import and reference these types directly.
// Auto-generated. Do not edit.
import type { InferData, InferVariants } from '@adonisjs/core/types/transformers'
import type SubscriberTransformer from '#transformers/subscriber_transformer'
import type UserTransformer from '#transformers/user_transformer'
export namespace Data {
export type Subscriber = InferData<SubscriberTransformer>
export namespace Subscriber {
export type Variants = InferVariants<SubscriberTransformer>
}
export type User = InferData<UserTransformer>
export namespace User {
export type Variants = InferVariants<UserTransformer>
}
}
You never write these interfaces by hand. Add a field to toObject and the type updates. Remove a field and every consumer that references it shows a compile error. One source of truth, shared across backend and frontend.
This is why transformers exist as a separate layer. Models are for persistence. Transformers are for presentation. By giving serialization its own dedicated place, the shape of your API responses becomes a typed contract that flows to your frontend automatically.
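The inference itself is ordinary TypeScript. A stripped-down sketch of the idea behind an InferData-style helper (illustrative names and classes, not the framework's internals):

```typescript
// The transformer's toObject return type IS the data type
class BaseTransformer<T> {
  constructor(protected resource: T) {}
}

type InferData<T> = T extends { toObject(): infer R } ? R : never

class PostTransformer extends BaseTransformer<{ id: number; title: string }> {
  toObject() {
    return { id: this.resource.id, title: this.resource.title.toUpperCase() }
  }
}

// PostData is inferred as { id: number; title: string }; nobody writes it by hand
type PostData = InferData<PostTransformer>

const data: PostData = new PostTransformer({ id: 1, title: 'hello' }).toObject()
```

Because the type is derived rather than declared, editing toObject immediately changes the inferred data type for every consumer.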
Type-safe inertia.render
In v6, inertia.render had no type checking. It accepted any page name as a string and any props object. If the page did not exist, you found out when the browser showed a blank screen. If the props were wrong, you found out when the component crashed.
Components are functions. Functions define the arguments they accept. It is the job of the caller to satisfy those arguments. Your Inertia pages are no different. A page component defines its props, and inertia.render is the caller. It should satisfy those props correctly.
v7 uses codegen to enforce this. The framework scans your Inertia page components at build time, extracts their prop types, and generates a type map that connects page names to their expected props.
Say you have this React page:
interface Props {
post: {
id: number
title: string
excerpt: string
}
comments: {
id: number
body: string
}[]
}
export default function Show({ post, comments }: Props) {
return (
<article>
<h1>{post.title}</h1>
{comments.map((c) => <Comment key={c.id} comment={c} />)}
</article>
)
}
The framework extracts the Props interface from this component and generates a type map.
// Auto-generated. Do not edit.
interface InertiaPages {
'posts/show': {
post: { id: number; title: string; excerpt: string }
comments: { id: number; body: string }[]
}
}
Now in your controller, inertia.render knows exactly what posts/show expects. Transformers fit naturally here. Because inertia.render can infer the return types of a transformer, it checks whether the transformed output satisfies the component's props. As long as the transformer's output matches the expected prop shape, you can pass it directly.
import type { HttpContext } from '@adonisjs/core/http'
import PostTransformer from '#transformers/post_transformer'
import CommentTransformer from '#transformers/comment_transformer'
export default class PostsController {
async show({ inertia, params }: HttpContext) {
const post = await Post.findOrFail(params.id)
const comments = await post.related('comments').query()
return inertia.render('posts/show', {
post: PostTransformer.transform(post),
comments: CommentTransformer.transform(comments),
})
}
}
If the props don't match, TypeScript catches it.
// ❌ TypeScript error: 'titel' does not exist
return inertia.render('posts/show', {
post: { titel: post.title },
})
// ❌ TypeScript error: property 'comments' is missing
return inertia.render('posts/show', {
post: PostTransformer.transform(post),
})
// ❌ TypeScript error: 'posts/shwo' is not a valid page
return inertia.render('posts/shwo', { ... })
The type safety extends to Inertia's data-loading helpers. You cannot use inertia.defer() on a required prop, because the prop would be undefined on the first render.
return inertia.render('posts/show', {
// 'post' is required in the component, pass it directly
post: PostTransformer.transform(post),
// 'comments' is optional in the component, so defer works
comments: inertia.defer(() => loadComments(post.id)),
})
inertia.lazy() works the same way. Required props cannot be lazy. inertia.merge() is available for deep-merging paginated data.
Type-safe <Link> and <Form> components
Earlier we covered the type-safe URL builder on the backend. <Link> and <Form> bring the same idea to the frontend.
Instead of manually constructing URLs or using string-based href values, you pass a route name and parameters to these components. They compute the URL for you using the same generated route types as urlFor, available on the client via the ~/client import.
import { Link } from '@adonisjs/inertia/react'
// ✅ Route exists, params are correct
<Link route="posts.show" routeParams={{ id: 1 }}>View post</Link>
// ❌ TypeScript error: 'posts.shwo' is not a valid route
<Link route="posts.shwo" routeParams={{ id: 1 }}>View post</Link>
// ❌ TypeScript error: missing required param 'id'
<Link route="posts.show" routeParams={{}}>View post</Link>
The <Form> component works the same way. It builds the action URL from the route name and validates parameters at compile time.
import { Form } from '@adonisjs/inertia/react'
<Form route="posts.store">
<input name="title" />
<button type="submit">Create post</button>
</Form>
Shared data moves to middleware
One more Inertia change worth calling out. In v6, shared data (the data available to every page, like the authenticated user or flash messages) was defined in the Inertia config file. This meant importing models, services, and transformers into a config file. Config files should be static declarations, not application logic.
v7 moves shared data to a middleware, giving you access to the full request lifecycle.
import type { HttpContext } from '@adonisjs/core/http'
import type { NextFn } from '@adonisjs/core/types/http'
import BaseInertiaMiddleware from '@adonisjs/inertia/inertia_middleware'
export default class InertiaMiddleware extends BaseInertiaMiddleware {
async share(ctx: HttpContext) {
const { session, auth } = ctx as Partial<HttpContext>
return {
errors: ctx.inertia.always(this.getValidationErrors(ctx)),
flash: ctx.inertia.always({
error: session?.flashMessages.get('error'),
success: session?.flashMessages.get('success'),
}),
user: ctx.inertia.always(
auth?.user ? UserTransformer.transform(auth.user) : undefined
),
}
}
}
Other Inertia improvements
- resolvePageComponent supports layouts, so you can define layout hierarchies in your page resolver
- Deep-merging props instead of shallow-merging
- inertia.page, inertia.ssrEnabled, inertia.getVersion, and inertia.requestInfo helpers
- Tuyau adapter for React (more on Tuyau in the API client section below)
- Session data is reflashed on full reload due to asset version mismatch
Type-safe API client
Everything above covers Inertia applications where the backend and frontend live in the same project. But what if you are building an API consumed by a separate frontend application using TanStack, Nuxt, or any other framework?
v7 includes a type-safe API client powered by Tuyau, built on top of fetch. It gives you a fully typed interface for sending requests to your AdonisJS API. Route names, parameters, request bodies, and response shapes are all type-checked. No manual type duplication between your API and your frontend.
The client is powered by a generated registry that your backend exports. Your frontend imports this registry and uses it to create a typed client.
import { createTuyau } from '@tuyau/core/client'
import { registry } from '@acme/backend/registry'
export const client = createTuyau({
baseUrl: import.meta.env.VITE_API_URL || 'http://localhost:3333',
registry,
headers: { Accept: 'application/json' },
credentials: 'include',
})
You can use the client directly to make typed fetch calls.
// GET /posts
const posts = await client.api.posts.index()
// GET /posts/:id
const post = await client.api.posts.show({
params: { id: 1 }
})
// POST /posts
const newPost = await client.api.posts.store({
body: { title: 'Hello' }
})
For projects using TanStack Query, you can create a query client that integrates directly with your hooks.
import { createTuyauReactQueryClient } from '@tuyau/react-query'
export const api = createTuyauReactQueryClient({ client })
import { useQuery } from '@tanstack/react-query'
import { api } from '~/client'
export function useGetPosts() {
return useQuery(api.posts.index.queryOptions())
}
export function useGetPost(id: string) {
return useQuery(api.posts.show.queryOptions({ params: { id } }))
}
The response types come from your transformers. When PostsController.show returns data through PostTransformer, the API client knows the exact shape of the response. Rename a field in your transformer, and the frontend shows a compile error.
The full picture
v7 has four type systems, each powered by its own codegen source:
- Route types are generated from your route definitions. They power urlFor on the backend and the <Link> and <Form> components on the frontend.
- Data types are generated from your transformer toObject methods. They give your API responses a typed contract that flows to your frontend code.
- Inertia page types are generated by scanning your frontend page components. They power inertia.render and ensure your controller sends the right props to the right page.
- API client types are generated from your routes and transformers into a registry. They power the Tuyau client for separate frontend applications.
Each system has its own source of truth. Together, they give you type safety across the entire application, whether you use Inertia or a separate frontend, without a single manually written type definition.
Every AdonisJS user has told us the same thing: get better at marketing, because the framework deserves more visibility. We agree. If v7 excites you, share it.
Developer experience
Beyond the type system, v7 ships a collection of improvements that reduce friction in day-to-day development.
Env modifiers
v7 adds the file: modifier for environment variables.
Secrets managed by tools like Docker Secrets or HashiCorp Vault are typically written to a file on disk rather than injected as environment variables. The file: modifier reads the contents of a file and uses that as the value. You point the variable at the file path, and AdonisJS reads it for you.
# Reads the contents of /run/secrets/gcs_key and uses it as the value
GCS_KEY=file:/run/secrets/gcs_key
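The behavior is easy to picture. This self-contained sketch mimics the idea with our own resolveEnvValue helper (not AdonisJS internals), using a temporary file in place of /run/secrets/gcs_key:

```typescript
import { readFileSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// If a value uses the file: modifier, read the referenced file instead
function resolveEnvValue(raw: string): string {
  if (raw.startsWith('file:')) {
    return readFileSync(raw.slice('file:'.length), 'utf8').trim()
  }
  return raw
}

// Demo: a temporary secret file standing in for a Docker/Vault secret
const secretPath = join(tmpdir(), 'gcs_key_demo')
writeFileSync(secretPath, 'super-secret-value\n')

const value = resolveEnvValue(`file:${secretPath}`)
// value === 'super-secret-value'
```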
Secret env type
Environment variables containing sensitive data (API keys, tokens, encryption keys) should never appear in logs or serialized output. A single console.log(env) during debugging can leak secrets to your log aggregator.
v7 adds a schema.secret() type that wraps the value in a Secret class. The Secret class overrides toString() and toJSON() to return [redacted], so the value cannot be logged or serialized accidentally.
import Env from '#start/env'
export default await Env.create(new URL('../', import.meta.url), {
APP_KEY: Env.schema.secret(),
STRIPE_SECRET: Env.schema.secret(),
})
import env from '#start/env'
// The actual value, for use in your application
env.get('APP_KEY').release()
// Logging is safe. Outputs: [redacted]
console.log(env.get('APP_KEY'))
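Conceptually, the wrapper looks something like this minimal sketch (illustrative, not the framework's actual Secret class):

```typescript
class Secret {
  #value: string

  constructor(value: string) {
    this.#value = value
  }

  /** Explicitly unwrap the real value */
  release() {
    return this.#value
  }

  /** String coercion and JSON serialization both redact */
  toString() {
    return '[redacted]'
  }
  toJSON() {
    return '[redacted]'
  }
}

const appKey = new Secret('super-secret-key')
const logged = `${appKey}` // '[redacted]'
const serialized = JSON.stringify({ appKey }) // '{"appKey":"[redacted]"}'
const actual = appKey.release() // 'super-secret-key'
```

The only way to get the real value out is an explicit release() call, which is easy to audit for.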
Multi-limiter
Rate limiting in v6 worked with a single limiter per check. If you wanted to limit login attempts by both IP address and email (a common pattern to prevent both brute-force attacks and credential stuffing), you had to orchestrate multiple limiters manually.
v7 adds limiter.multi() to act on multiple rate limiters in a single call.
import limiter from '@adonisjs/limiter/services/main'
export default class SessionController {
async store({ request }: HttpContext) {
const payload = request.only(['email', 'password'])
const loginLimiter = limiter.multi([
// 10 requests per minute per IP address
{ duration: '1 min', requests: 10, key: `login_${request.ip()}` },
// 5 requests per minute per IP + email, with a 20 minute block
{ duration: '1 min', requests: 5, blockDuration: '20 mins', key: `login_${request.ip()}_${payload.email}` },
])
await loginLimiter.penalize(() => {
return User.verifyCredentials(payload.email, payload.password)
})
}
}
If the credentials are wrong, both limiters are penalized. If either limit is exceeded, the request is blocked. penalize handles the coordination for you.
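The coordination logic is straightforward to reason about. Here is an in-memory sketch of the idea; CounterLimiter and MultiLimiter are our illustrations rather than the @adonisjs/limiter API, and real limiters also track durations, keys, and block windows:

```typescript
class CounterLimiter {
  private attempts = 0
  constructor(private maxAttempts: number) {}

  get blocked() {
    return this.attempts >= this.maxAttempts
  }
  penalize() {
    this.attempts++
  }
}

class MultiLimiter {
  constructor(private limiters: CounterLimiter[]) {}

  /** Blocked as soon as any single limit is exhausted */
  get blocked() {
    return this.limiters.some((limiter) => limiter.blocked)
  }

  /** Run an action; on failure, penalize every limiter together */
  async penalize<T>(action: () => Promise<T>): Promise<T> {
    if (this.blocked) {
      throw new Error('Too many attempts')
    }
    try {
      return await action()
    } catch (error) {
      this.limiters.forEach((limiter) => limiter.penalize())
      throw error
    }
  }
}
```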
Smart migrations
The make:migration command now auto-detects your intent from the name you give it.
node ace make:migration create_posts
# Scaffolds: this.schema.createTable('posts', (table) => { ... })
node ace make:migration add_status_to_posts
# Scaffolds: this.schema.alterTable('posts', (table) => { ... })
Names starting with create_ scaffold a createTable call. Names starting with add_, alter_, or modify_ scaffold an alterTable call. You can still override the scaffolded code, but the detection usually gets it right.
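The detection boils down to a prefix check, along these lines (our sketch of the behavior described above, not the actual make:migration source):

```typescript
// Map a migration name to the schema call it should scaffold
function detectMigrationAction(
  name: string
): 'createTable' | 'alterTable' | null {
  if (name.startsWith('create_')) return 'createTable'
  if (/^(add|alter|modify)_/.test(name)) return 'alterTable'
  return null // no prefix match: scaffold an empty migration body
}

detectMigrationAction('create_posts') // 'createTable'
detectMigrationAction('add_status_to_posts') // 'alterTable'
detectMigrationAction('cleanup_orphans') // null
```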
Simplified auth setup
The withAuthFinder mixin in v6 required a callback that selected a hash driver, explicit uid configuration, and a password column name.
import hash from '@adonisjs/core/services/hash'
import { compose } from '@adonisjs/core/helpers'
import { BaseModel } from '@adonisjs/lucid/orm'
import { withAuthFinder } from '@adonisjs/auth/mixins/lucid'
const AuthFinder = withAuthFinder(() => hash.use('scrypt'), {
uids: ['email'],
passwordColumnName: 'password',
})
export default class User extends compose(BaseModel, AuthFinder) {}
In v7, you pass the hash service directly. The mixin uses email as the uid and password as the column name by default, since these are the correct values in the vast majority of applications.
import hash from '@adonisjs/core/services/hash'
import { compose } from '@adonisjs/core/helpers'
import { BaseModel } from '@adonisjs/lucid/orm'
import { withAuthFinder } from '@adonisjs/auth/mixins/lucid'
export default class User extends compose(
BaseModel,
withAuthFinder(hash),
) {}
v7 also adds user.validatePassword(password) that throws on failure, TokensProvider.deleteAll() for bulk token cleanup, and auth.checkUsing() for checking multiple guards in a single call.
Vite HMR port auto-assignment
If you work on multiple AdonisJS projects simultaneously, you have run into Vite HMR port conflicts. In v6, the second project would fail to start its HMR server because the default port was already taken, and you had to manually configure different ports.
v7 automatically assigns a random available port to the Vite HMR server when the default port is in use. No configuration needed.
Mail sender from environment variables
You can now configure the default mail sender name and address through environment variables instead of hardcoding them in the config file.
MAIL_FROM_ADDRESS=hello@myapp.com
MAIL_FROM_NAME=MyApp
This makes it easy to use different sender addresses per environment without touching application code.
Type helpers
We have exposed all the internal TypeScript helper utilities used across AdonisJS packages through a single module. These helpers are available for both your applications and third-party packages.
import type { InferRouteParams } from '@adonisjs/core/helpers/types'
InferRouteParams<'/users'> // {}
InferRouteParams<'/users/:id'> // { id: string }
InferRouteParams<'/users/:id?'> // { id?: string }
InferRouteParams<'/users/:id.json'> // { id: string }
InferRouteParams<'/users/*'> // { '*': string[] }
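Helpers like InferRouteParams are built from template literal types. Here is a simplified sketch that handles only required params (the real helper also covers optional params, suffixes like .json, and wildcards):

```typescript
// Recursively pull each ':param' segment out of the path string
type Params<Path extends string> =
  Path extends `${string}:${infer P}/${infer Rest}`
    ? { [K in P]: string } & Params<`/${Rest}`>
    : Path extends `${string}:${infer P}`
      ? { [K in P]: string }
      : {}

// These assignments only compile if the inference works as described
const noParams: Params<'/users'> = {}
const oneParam: Params<'/users/:id'> = { id: '1' }
const twoParams: Params<'/users/:id/posts/:postId'> = { id: '1', postId: '2' }
```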
AI agent detection
AdonisJS now exposes runtime signals to detect when your application is being executed under the control of an AI coding agent.
import app from '@adonisjs/core/services/app'
app.detectedAIAgent()
// → 'claude' | 'gemini' | 'copilot' | ... | null
app.runningInAIAgent
// → boolean
We are already using this internally. For example, Japa's test runner switches to the dot reporter when it detects an AI agent, reducing the amount of output tokens the agent has to parse when determining whether a test passed or failed.
Synchronous logger for development
Pino, the logger used by AdonisJS, batches writes and flushes them asynchronously. This is the right choice for production performance, but during development it means logs can appear out of order or with a delay relative to the actions that triggered them.
v7 adds a syncDestination helper that writes logs immediately and synchronously. In development and testing, your logs appear right next to the request or action that produced them.
import env from '#start/env'
import app from '@adonisjs/core/services/app'
import { defineConfig, syncDestination, targets } from '@adonisjs/core/logger'
const loggerConfig = defineConfig({
default: 'app',
loggers: {
app: {
enabled: true,
name: env.get('APP_NAME'),
level: env.get('LOG_LEVEL'),
destination: !app.inProduction ? await syncDestination() : undefined,
transport: {
targets: [targets.file({ destination: 1 })],
},
},
},
})
In production, destination is undefined and Pino uses its default asynchronous behavior.
A new encryption module
The v6 encryption module was a single algorithm with a single key, both defined inside config/app.ts. There was no way to use multiple encryption algorithms, no way to rotate keys without re-encrypting all your data at once, and no deterministic encryption for querying encrypted database columns.
v7 replaces the encryption module with a complete rewrite built on @boringnode/encryption. Encryption now has its own dedicated config file and supports multiple named drivers, each with its own algorithm and keys.
import env from '#start/env'
import { defineConfig, drivers } from '@adonisjs/core/encryption'
export default defineConfig({
default: 'aes256gcm',
list: {
aes256gcm: drivers.aes256gcm({
keys: [env.get('APP_KEY')],
}),
},
})
Key rotation
Each driver accepts an array of keys. The first key in the array is used for encryption. All remaining keys are used only for decryption. This lets you introduce a new key, start encrypting new data with it, and still decrypt old data encrypted with the previous key.
aes256gcm: drivers.aes256gcm({
keys: [
env.get('APP_KEY'), // new key: encrypts new data
env.get('OLD_APP_KEY'), // old key: decrypts existing data only
],
}),
Once you are confident all data has been re-encrypted with the new key, remove the old one from the array.
Deterministic encryption
Standard encryption produces different ciphertext every time you encrypt the same input. This is desirable for most use cases, but it makes equality checks impossible. You cannot query a database column for a specific encrypted value if the ciphertext is different every time.
v7 ships a deterministic encryption driver based on the AES-SIV algorithm. Deterministic encryption produces the same ciphertext for the same input and key, so you can perform equality comparisons between encrypted values.
import env from '#start/env'
import { defineConfig, drivers } from '@adonisjs/core/encryption'
export default defineConfig({
default: 'aes256gcm',
list: {
aes256gcm: drivers.aes256gcm({
keys: [env.get('APP_KEY')],
}),
aessiv: drivers.aessiv({
keys: [env.get('DETERMINISTIC_KEY')],
}),
},
})
import encryption from '@adonisjs/core/services/encryption'
// Encrypt with the deterministic driver
const encryptedEmail = encryption.use('aessiv').encrypt(email)
// Same input always produces the same ciphertext, so queries work
const user = await User.findBy('encrypted_email', encryptedEmail)
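The difference is easy to demonstrate with Node's crypto module. This is a conceptual demo of the randomized-vs-deterministic distinction, not AES-SIV itself: we derive the IV from an HMAC of the plaintext, which is the SIV family's core trick in simplified form.

```typescript
import { createCipheriv, createHmac, randomBytes } from 'node:crypto'

const key = randomBytes(32)

function encrypt(plaintext: string, iv: Buffer): string {
  const cipher = createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()])
  return Buffer.concat([iv, ciphertext]).toString('hex')
}

// Fresh random IV every call: same input, different output each time
const randomized = (value: string) => encrypt(value, randomBytes(12))

// IV derived from the plaintext itself: same input, same output
const deterministic = (value: string) =>
  encrypt(value, createHmac('sha256', key).update(value).digest().subarray(0, 12))

randomized('a@example.com') === randomized('a@example.com') // false
deterministic('a@example.com') === deterministic('a@example.com') // true
```

The stable ciphertext is what makes database equality queries on encrypted columns possible.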
Migrating from v6
If you are upgrading from v6, set the legacy driver as your default. This driver uses the same algorithm as v6, so your application can continue reading existing encrypted data. You can then encrypt new data with aes256gcm and switch the default once the migration is complete.
import env from '#start/env'
import { defineConfig, drivers } from '@adonisjs/core/encryption'
export default defineConfig({
default: 'legacy',
list: {
legacy: drivers.legacy({
keys: [env.get('APP_KEY')],
}),
aes256gcm: drivers.aes256gcm({
keys: [env.get('APP_KEY')],
}),
},
})
Observability
We are shipping @adonisjs/otel, a new package that integrates AdonisJS with OpenTelemetry. You install it, configure your exporter, and get automatic tracing for your entire application. You see a visual timeline of every request showing middleware execution, database queries, and external API calls.
node ace add @adonisjs/otel
Traces are exportable to Jaeger, Grafana Tempo, Datadog, Honeycomb, or any other OpenTelemetry-compatible backend.
Currently, @adonisjs/otel relies on the OpenTelemetry SDK's monkey-patching behavior to instrument your application. This works, and it is how most OpenTelemetry integrations operate today. But we are not stopping there.
We have started adding diagnostic channels to the framework's core packages. Diagnostic channels are a Node.js native mechanism for publishing trace data. They have near-zero overhead when nothing is listening, and they do not require patching module internals. We have already added them to several packages: Ace (command execution), Fold (container resolution), HTTP server (middleware and serialization), Events (dispatching), Application (provider lifecycle), Health (check execution), Redis (commands), and Cache (operations).
As we expand coverage, @adonisjs/otel will shift from monkey-patching to consuming these diagnostic channels directly. The end result is an observability layer that is both more reliable (no patching surprises) and lower overhead. This migration is incremental. You install @adonisjs/otel today, and it gets better over time without changes on your end.
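Node's built-in diagnostics_channel module is the primitive behind this: publishing is effectively free when nobody is subscribed, and subscribers receive messages synchronously. A minimal sketch of the mechanism (the channel name below is made up for illustration; the framework's actual channel names may differ):

```typescript
import { channel, subscribe } from 'node:diagnostics_channel'

// Hypothetical channel name, for illustration only.
const requests = channel('adonisjs:http:request')

const seen: Array<{ url: string }> = []
subscribe('adonisjs:http:request', (message) => {
  seen.push(message as { url: string })
})

// publish() short-circuits when hasSubscribers is false, which is
// why instrumented code stays cheap when no tracer is attached.
if (requests.hasSubscribers) {
  requests.publish({ url: '/polls', method: 'GET' })
}
```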
Lucid
Lucid is the data layer used by most AdonisJS applications. We have not introduced any breaking changes to it in v7, making upgrades smoother.
That said, we have shipped several improvements:
- truncateAllTables() on the query client for cleaning up test databases in a single call.
- Table-qualified column aliases for avoiding ambiguity in complex joins.
- UUID to string column mapping for working with UUID primary keys without manual casting.
- Optional column names in the unique and exists validation rules, so the validator infers the column from the field name when they match.
The biggest improvement is the generation of schema classes, which addresses a long-standing complaint: you define your database schema in migrations, then re-type the same columns inside your models.
Schema classes
In v6, models had to redeclare every database column with @column() decorators. Even a simple model required repeating the column names, types, and options that your migrations already defined.
import { DateTime } from 'luxon'
import { BaseModel, column } from '@adonisjs/lucid/orm'
export default class Poll extends BaseModel {
@column({ isPrimary: true })
declare id: number
@column()
declare userId: number
@column()
declare title: string
@column()
declare pollColor: string
@column()
declare slug: string
@column.dateTime()
declare closesAt: DateTime
@column.dateTime({ autoCreate: true })
declare createdAt: DateTime
@column.dateTime({ autoCreate: true, autoUpdate: true })
declare updatedAt: DateTime
/**
* Find if the poll has expired or not
*/
get expired() {
return this.closesAt.diff(DateTime.local(), 'seconds').seconds <= 0
}
}
v7 follows a migrations-first approach. Your migrations remain the source of truth for database structure. After migrations run, Lucid scans the database and generates strongly typed schema classes. Your models extend these classes and inherit all column definitions automatically.
import { DateTime } from 'luxon'
import { PollSchema } from '#database/schema'
export default class Poll extends PollSchema {
/**
* Find if the poll has expired or not
*/
get expired() {
return this.closesAt.diff(DateTime.local(), 'seconds').seconds <= 0
}
}
Column definitions (id, title, slug, timestamps, and everything else) are inherited from the generated schema class. Your model focuses on relationships, hooks, and business logic.
This also helps projects with legacy or pre-existing databases. You can generate schema classes directly from the database without recreating migration history. Schema classes are regenerated automatically whenever migrations run, so your models stay in sync with the actual database structure.
Codemods
The assembler holds the dev tooling needed by AdonisJS apps and packages. It already ships codemods that modify your source files when a package runs node ace configure, handling tasks like adding imports, registering providers, and updating config files.
v7 adds the following new codemods:
import type { Codemods } from '@adonisjs/assembler'
export async function configure(codemods: Codemods) {
// Add a validator to an existing controller method
await codemods.addValidator('app/controllers/posts_controller.ts', {
method: 'store',
validator: 'createPostValidator',
importPath: '#validators/post',
})
// Add rate limiting to a route
await codemods.addLimiter('start/routes.ts', {
route: '/login',
limiter: 'loginLimiter',
})
// Scaffold a new method on an existing controller
await codemods.addControllerMethod('app/controllers/posts_controller.ts', {
name: 'archive',
})
// Apply a mixin to a model
await codemods.addModelMixin('app/models/user.ts', {
mixin: 'withAuthFinder',
importPath: '@adonisjs/auth/mixins/lucid',
})
}
These codemods are designed for package authors and tooling.
Edge Markdown
We are shipping edge-markdown, a new package that lets you render Markdown files to HTML directly inside your Edge templates using MDC (Markdown Components) syntax. MDC allows you to use Edge components within Markdown files, enabling a seamless mix of content and dynamic UI elements. This makes it a natural fit for documentation sites, blogs, and content-heavy applications that need the power of templating within Markdown.
@let(doc = await $markdown.render({
file: absolutePathToMdFile,
}))
<h1>{{{ doc.frontmatter.title }}}</h1>
<div>{{{ doc.content }}}</div>
<div>{{{ doc.toc }}}</div>
The $markdown.render method parses the Markdown file, extracts frontmatter, generates a table of contents, and renders MDC components through Edge. You get back the rendered HTML, the frontmatter object, and the table of contents, all from a single call.
Content collections
We have added @adonisjs/content, a lightweight content management layer for working with typed collections of static data. Think of it as a lightweight Astro content layer for AdonisJS. If you are building a docs site, a blog, or anything that needs structured content backed by typed data, this package keeps everything consistent without reaching for a full CMS.
We originally built it for our own documentation and blog needs (as well as for loading GitHub sponsors and release data), and it is now available for general use.
The package lets you define collections backed by VineJS schemas and load data from JSON files or GitHub. Each collection exposes a simple query interface, along with support for defining custom views so you can add your own filtering or grouping logic.
import vine from '@vinejs/vine'
import app from '@adonisjs/core/services/app'
import { Collection } from '@adonisjs/content'
import { loaders } from '@adonisjs/content/loaders'
export const blogPosts = Collection.create({
cache: app.inProduction,
loader: loaders.jsonLoader(app.makePath('content/blog_posts/db.json')),
schema: vine.array(
vine.object({
coverImage: vine.string().toVitePath(),
title: vine.string(),
slug: vine.slug(),
description: vine.string(),
authorName: vine.string(),
contentPath: vine.string().toAbsolutePath(),
})
),
views: {
findBySlug(data, slug: string) {
return data.find((post) => post.slug === slug)
},
},
})
You define the schema once and everything downstream is typed. The views object lets you add custom query methods (like findBySlug above) that are fully type-safe and appear on the collection's query interface.
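The pattern behind views is easy to sketch in plain TypeScript: each view closes over the loaded, schema-validated data, so argument and return types flow through to call sites automatically. This is a hypothetical, simplified shape, not the package's internals:

```typescript
// Hypothetical sketch of the "views" pattern: query helpers that
// close over typed data, so their signatures are inferred.
type Post = { slug: string; title: string }

function makeCollection(data: Post[]) {
  return {
    all: () => data,
    findBySlug: (slug: string) => data.find((p) => p.slug === slug),
  }
}

const blog = makeCollection([{ slug: 'v7', title: 'AdonisJS v7 is here' }])
```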
The package ships with built-in loaders for GitHub sponsors, releases, and contributors. JSON loaders support caching and configurable refresh schedules. Vite integration is included for resolving asset paths inside your content.
New error page
Youch is the error page you see in the browser during development whenever an error occurs. It was initially written in 2016 and had started to show its age.
We rewrote Youch from the ground up. The on-disk size dropped from 1.3 MB to 700 KB. The new version adds support for error causes, ensures proper compatibility with ESM modules, and ships with a refreshed UI design.
Youch is a standalone package used beyond AdonisJS. Nuxt and Cloudflare both use it for their development error pages, so this rewrite benefits a wider ecosystem.
Platform-native Response support
In v6, returning a platform-native Response instance from a route handler resulted in an error. You had to manually convert or pipe the response yourself.
v7 handles native Response instances automatically. You can return them directly from your route handlers and controller methods.
This enables seamless integration with third-party libraries that return native Response objects (like Vercel's AI SDK) without requiring manual conversion or wrapper code.
import { HttpContext } from '@adonisjs/core/http'
import { streamText } from 'ai'
export default class AiController {
async chat({ request }: HttpContext) {
const result = await streamText({
model: yourModel,
prompt: request.input('prompt'),
})
return result.toUIMessageStreamResponse()
}
}
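The handler above returns whatever the AI SDK produces, but any native Response works the same way. Since Response is a global in modern Node.js, you can also construct one by hand and return it from a handler:

```typescript
// Response is global in modern Node.js; no import needed.
// In v7, returning an object like this from a route handler
// sends it to the client as-is.
const response = new Response(JSON.stringify({ pong: true }), {
  status: 200,
  headers: { 'content-type': 'application/json' },
})
```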
Upgrading from v6
v7 has breaking changes, but most are mechanical: renaming imports, moving config, updating method calls. We have written a detailed upgrade guide that walks through every change with before/after code examples and a checklist.
The biggest migrations are the encryption module (if you have encrypted data) and Inertia shared data (if you use Inertia).
Security
We patched four CVEs during the v7 development cycle. All four are fixed in v7, and backported patches are available for v6.
Path traversal in file uploads (CVE-2026-21440)
MultipartFile.move() in v6 used the client-provided filename by default when writing the file to disk. A crafted filename containing path traversal characters could write files outside the intended upload directory.
// v6: the client sends a filename like "../../etc/cron.d/malicious"
// move() uses that filename as-is
await file.move('/uploads')
// → writes to /etc/cron.d/malicious ⚠️
In v7, move() renames uploaded files with a random UUID by default. The client-provided filename is ignored unless you explicitly opt in.
// v7: client filename is ignored, UUID is used instead
await file.move('/uploads')
// → writes to /uploads/550e8400-e29b-41d4-a716-446655440000.png ✓
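The fix boils down to never trusting the client-provided name for anything beyond, at most, its extension. A simplified sketch of the idea (not the framework's actual code):

```typescript
import { randomUUID } from 'node:crypto'
import { extname, join, resolve } from 'node:path'

// extname() ignores directory segments, so "../../etc/cron.d/evil.png"
// contributes only ".png"; the rest of the client name is discarded.
function safeUploadPath(uploadsDir: string, clientName: string): string {
  return join(resolve(uploadsDir), `${randomUUID()}${extname(clientName)}`)
}
```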
Mass assignment in Lucid (CVE-2026-22814)
Lucid's fill, merge, and create methods in v6 accepted any key-value pairs, including keys that matched internal ORM properties like $isPersisted and $original. An attacker who controlled the request body could corrupt model state, causing the ORM to skip database writes, bypass hooks, or produce inconsistent data.
// v6: internal properties could be overwritten through user input
await User.create({
email: 'user@example.com',
$isPersisted: true, // ⚠️ corrupts internal state
$original: {}, // ⚠️ corrupts internal state
})
v7 silently ignores any keys that start with $ in fill, merge, and create. Internal ORM properties are protected regardless of what the request body contains.
// v7: internal properties are silently ignored
await User.create({
email: 'user@example.com',
$isPersisted: true, // silently ignored
$original: {}, // silently ignored
})
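Conceptually, the guard is a key filter applied before assignment. A simplified sketch of the behavior (not Lucid's actual implementation):

```typescript
// Drop any $-prefixed keys before assigning user input to a model,
// so internal ORM state can never be set from a request body.
function stripInternalKeys(input: Record<string, unknown>) {
  return Object.fromEntries(
    Object.entries(input).filter(([key]) => !key.startsWith('$'))
  )
}
```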
Prototype Pollution in AdonisJS multipart body parsing (CVE-2026-25754)
We hardened the internal storage the multipart parser uses to collect form fields, preventing misuse of specially crafted field names.
Due to insufficient validation of multipart field names, specially crafted fields containing reserved property names such as __proto__, constructor, or prototype could be assigned directly to objects created during parsing. This allows an attacker to pollute object prototypes, potentially affecting other parts of the application that rely on these objects.
The vulnerability is limited to multipart request parsing and does not affect JSON or URL-encoded body parsing.
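A common hardening for this class of bug is to collect fields into a null-prototype object (or a Map), so reserved names like __proto__ become ordinary own properties instead of reaching the prototype chain. A sketch of the idea (not the parser's actual code):

```typescript
// A null-prototype object has no Object.prototype to pollute:
// "__proto__" is stored as a plain own property.
function collectFields(pairs: Array<[string, string]>) {
  const fields: Record<string, string> = Object.create(null)
  for (const [name, value] of pairs) {
    fields[name] = value
  }
  return fields
}
```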
Denial of Service via unbounded memory usage in AdonisJS multipart parsing (CVE-2026-25762)
We fixed an issue where the internal buffer used for file type detection could grow indefinitely when processing files whose content didn't match any known magic number signature. The parser now falls back to filename-based detection after a reasonable threshold, ensuring predictable memory usage during file uploads.
Thank you
To the core team: six months of work and 200+ pre-releases. Thank you.
To the Insiders: your financial support keeps AdonisJS alive and independent. Open source does not sustain itself, and your contributions make this possible.
To everyone who filed issues, tested pre-releases, shared feedback on Discord, wrote blog posts, and built applications with AdonisJS: you are the reason we keep going.
What's next
With the infrastructure now in place (tracing channels, the type system, the new assembler), we are already working on what comes next: queues, workflows, scheduler, and an AI SDK.
For now, go build something with v7.