Audit logs are the record of who did what and when in your app. They sound boring until the day a
B2B customer asks "who deleted my project?" or a security researcher spots suspicious access.
Without an audit log: you have no answer. With one: you do.
This guide explains how to implement audit logs in a Next.js app with Prisma in a way that's useful
in production and doesn't become a runaway table.
## What to log
Don't log everything. That floods the DB and tanks performance. Log:
- Critical state changes: account creation/deletion, plan changes, ownership transfer
- Admin actions: ban/unban, impersonate, role changes
- Operations on sensitive data: exports, reads of personal information, password changes
- Login (success and failure) and logout
- Billing changes
Do NOT log:
- Every list GET
- Routine reads that affect no one
- Cosmetic changes without impact
## Step 1: schema
```prisma
model AuditLog {
  id             String   @id @default(cuid())
  createdAt      DateTime @default(now())
  userId         String?
  organizationId String?
  action         String  // e.g., 'user.banned', 'project.deleted'
  resource       String? // e.g., 'project:abc-123'
  metadata       Json?   // additional data
  ip             String?
  userAgent      String?

  user         User?         @relation(fields: [userId], references: [id], onDelete: SetNull)
  organization Organization? @relation(fields: [organizationId], references: [id], onDelete: SetNull)

  @@index([organizationId, createdAt])
  @@index([userId, createdAt])
  @@index([action, createdAt])
}
```
`onDelete: SetNull` is important: if the user is deleted, the log stays. Audit logs must outlive the
actors they record.
## Step 2: centralized helper
`src/lib/audit/log.ts`:
```ts
import { Prisma } from '@prisma/client';
import { db } from '@/lib/db/client';
import { headers } from 'next/headers';

export async function logAudit(params: {
  userId?: string;
  organizationId?: string;
  action: string;
  resource?: string;
  metadata?: Record<string, unknown>;
}) {
  const h = await headers();
  // x-forwarded-for can be a comma-separated chain of proxies; keep the client IP.
  const ip = h.get('x-forwarded-for')?.split(',')[0].trim() ?? h.get('x-real-ip') ?? null;
  const userAgent = h.get('user-agent') ?? null;

  await db.auditLog.create({
    data: {
      ...params,
      metadata: params.metadata as Prisma.InputJsonValue | undefined,
      ip,
      userAgent,
    },
  });
}
```
And use it:
```ts
await logAudit({
  userId: session.user.id,
  organizationId: orgId,
  action: 'project.deleted',
  resource: `project:${projectId}`,
  metadata: { projectName: project.name },
});
```
## Step 3: action naming convention
Always use `resource.verb`:
- `user.created`, `user.banned`, `user.deleted`
- `project.created`, `project.deleted`, `project.transferred`
- `subscription.created`, `subscription.canceled`
- `auth.login.success`, `auth.login.failed`
Be consistent from day 1. If each developer invents their own names, in six months you won't be able to find anything.
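One way to keep the convention honest is to encode it in the type system and add a runtime guard for dynamic values. The type and helper below are an illustrative sketch, not part of any library:

```typescript
// Illustrative: a union type catches typos in action names at compile time...
type AuditAction =
  | 'user.created' | 'user.banned' | 'user.deleted'
  | 'project.created' | 'project.deleted' | 'project.transferred'
  | 'subscription.created' | 'subscription.canceled'
  | 'auth.login.success' | 'auth.login.failed';

// ...and a regex guard validates the resource.verb shape for strings
// built at runtime (e.g. from webhook payloads).
function isValidAuditAction(action: string): boolean {
  return /^[a-z]+(\.[a-z_]+)+$/.test(action);
}
```

With the union type in the `logAudit` signature, `action: 'projct.deleted'` becomes a compile error instead of an unsearchable row.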
## Step 4: don't block the operation
The audit log must NOT block the main operation. If the log fails, the operation continues.
Wrong:
```ts
await db.project.delete({ where: { id } });
await logAudit({ action: 'project.deleted', resource: `project:${id}` });
```
If `logAudit` fails, it throws, the client receives an error response, and yet the project is ALREADY
deleted. Confused customer.
Right:
```ts
await db.project.delete({ where: { id } });
logAudit({ action: 'project.deleted', resource: `project:${id}` }).catch(console.error);
```
Or better: an async queue (Inngest, BullMQ) for audit logs.
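If you don't want a queue yet, a tiny wrapper can make the fire-and-forget pattern impossible to get wrong. `makeSafeLogger` is a hypothetical helper, sketched here:

```typescript
// Hypothetical sketch: wrap any async audit logger so a failure can never
// propagate into the main operation's control flow.
type AuditParams = { action: string; resource?: string };

export function makeSafeLogger(log: (p: AuditParams) => Promise<void>) {
  return (p: AuditParams): void => {
    // void + .catch: fire-and-forget; errors are reported but swallowed.
    void log(p).catch((err) => console.error('audit log failed:', err));
  };
}
```

Callers invoke `safeLog({ action: 'project.deleted' })` without `await`, so an outage on the audit table can't fail a delete that already succeeded.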
## Step 5: admin view
```tsx
'use client';

import type { AuditLog, User } from '@prisma/client';

type AuditLogWithUser = AuditLog & { user: User | null };

export function AuditLogTable({ logs }: { logs: AuditLogWithUser[] }) {
  return (
    <table>
      <thead>
        <tr>
          <th>Date</th>
          <th>User</th>
          <th>Action</th>
          <th>Resource</th>
          <th>IP</th>
        </tr>
      </thead>
      <tbody>
        {logs.map((log) => (
          <tr key={log.id}>
            <td>{log.createdAt.toLocaleString()}</td>
            <td>{log.user?.email ?? 'system'}</td>
            <td>{log.action}</td>
            <td>{log.resource}</td>
            <td>{log.ip}</td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```
With filters by user, date, action, and org.
## Step 6: retention
Logs grow. Decide how long you keep them:
- **30 days**: typical for apps with low compliance needs
- **1 year**: B2B apps whose customers ask for it
- **7 years**: financial or healthcare regulation
Scheduled job that deletes old logs:
```ts
import { subDays } from 'date-fns';

await db.auditLog.deleteMany({
  where: { createdAt: { lt: subDays(new Date(), 365) } },
});
```
If you must keep everything for compliance, export to S3 before deleting.
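For the export-then-delete path, one simple archive format is NDJSON (one JSON object per line), which S3 and most log-ingestion tools handle directly. The helper below is an illustrative sketch:

```typescript
// Illustrative: serialize a batch of audit rows to NDJSON before uploading
// the resulting string to object storage (e.g. S3) and deleting the rows.
interface ArchivableLog {
  id: string;
  createdAt: string; // ISO timestamp
  action: string;
  metadata?: Record<string, unknown>;
}

export function toNdjson(logs: ArchivableLog[]): string {
  return logs.map((log) => JSON.stringify(log)).join('\n');
}
```

Archive first, delete second: if the upload fails, the rows are still in the database and the job can retry.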
## Common errors
**1. Logging passwords or tokens**: never put sensitive fields in `metadata`. Filter before logging.
**2. Table without indexes**: in production with millions of rows, a query without an index takes
minutes. Index by `organizationId, createdAt`.
**3. Logging from noisy middlewares**: if you log every request, in 1 month you have a table of
millions of useless rows. Be selective.
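For the first pitfall, a small redaction pass before every `logAudit` call helps. The key list here is illustrative; extend it to match your own field names:

```typescript
// Illustrative: strip sensitive values from metadata before it is persisted.
const SENSITIVE_KEYS = ['password', 'token', 'secret', 'apikey', 'authorization'];

export function redactMetadata(
  metadata: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(metadata).map(([key, value]) =>
      // Substring match on the lowercased key catches variants
      // like accessToken or API_KEY.
      SENSITIVE_KEYS.some((s) => key.toLowerCase().includes(s))
        ? [key, '[REDACTED]']
        : [key, value],
    ),
  );
}
```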
## Bottom line
Audit logs are 1-2 hours of implementation if you do it from the start, or a huge pain if you add
them when there's already production data. Worth doing on day 1.
And when a customer asks "who did X?", you have an answer. That, in B2B, is gold.