guide
Next.js and Remix File Uploads without Vendor Lock-in (Pushduck Guide)
Ship type-safe S3 uploads with Pushduck across Next.js, Remix, and Express while keeping ownership of your infrastructure.
Hunchbite Team
November 11, 2025
12 min read
pushduck · file upload · nextjs · remix · aws s3 · cloudflare r2 · developer experience
Why Pushduck Is Worth Evaluating
- Type safety end-to-end. Both server configs and client hooks are generated from the same schema, reducing runtime upload errors.
- Lightweight bundle (~6KB). Pushduck avoids AWS SDK bloat, keeping client bundles lean. (pushduck.dev)
- Runs everywhere. Supports Next.js, Remix, Express, Fastify, SvelteKit, Astro, and edge runtimes out of the box.
- Own your infrastructure. Works with any S3-compatible storage (AWS S3, Cloudflare R2, DigitalOcean Spaces, MinIO) with no SaaS lock-in or per-upload fees.
- Built-in validation & security. Enforce MIME types, size limits, or custom rules before an upload starts. Pushduck also signs requests server-side, so credentials stay private.
- Observability ready. Client hooks expose progress, ETA, and error states that you can feed into logging/observability tooling.
Architecture Overview
Browser/Client ──▶ Pushduck client (generated hooks)
│
▼
/api/upload (Next.js/Remix route)
│ ╲ uses unified router + validators
▼
S3-compatible storage (R2, S3, etc.)
- Server: Define storage provider credentials once with `createUploadConfig()`, then expose handlers through your framework’s routing system.
- Client: Generate a typed upload client with `createUploadClient()` or the `useUpload()` hook. The client knows every upload “route” you defined on the server.
Prerequisites
- Node.js 18+
- Target framework (examples below use Next.js App Router and Remix)
- Access keys for your preferred S3-compatible storage
- Optional: UI library for drag-and-drop, toast notifications, or progress meters
Environment Setup
```bash
npm install pushduck
# or pnpm add pushduck
```
Configure your environment variables (`.env.local` for Next.js, `.env` for Remix):
```bash
# Cloudflare R2 example
CLOUDFLARE_R2_ACCESS_KEY_ID=your_access_key
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your_secret_key
CLOUDFLARE_R2_ACCOUNT_ID=your_account_id
CLOUDFLARE_R2_BUCKET_NAME=your_bucket_name
```
For AWS S3, switch to `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, and `AWS_BUCKET`.
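If you target AWS S3 instead of R2, only the provider block changes. A minimal sketch, assuming the same `createUploadConfig()` builder and the `"aws"` provider key used in the Remix example later in this guide:

```typescript
// Sketch: AWS S3 provider wired through the same builder as the R2 example.
import { createUploadConfig } from "pushduck/server";

const { s3 } = createUploadConfig()
  .provider("aws", {
    region: process.env.AWS_REGION!,
    bucket: process.env.AWS_BUCKET!,
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  })
  .build();
```

Everything downstream (`s3.createRouter()`, the typed client) stays identical, which is what makes switching providers a config-only change.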
Quick Start: Next.js App Router (pushduck nextjs upload)
- Create the server config at `app/api/upload/route.ts`:
```ts
import { createUploadConfig } from "pushduck/server";

const { s3 } = createUploadConfig()
  .provider("cloudflareR2", {
    accessKeyId: process.env.CLOUDFLARE_R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.CLOUDFLARE_R2_SECRET_ACCESS_KEY!,
    region: "auto",
    endpoint: `https://${process.env.CLOUDFLARE_R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
    bucket: process.env.CLOUDFLARE_R2_BUCKET_NAME!,
    accountId: process.env.CLOUDFLARE_R2_ACCOUNT_ID!,
  })
  .build();

const router = s3.createRouter({
  imageUpload: s3
    .image()
    .maxFileSize("5MB")
    .mimeTypes(["image/jpeg", "image/png"]),
  documentUpload: s3.file().maxFileSize("10MB"),
});

export const { GET, POST } = router.handlers;
export type AppRouter = typeof router;
```
- Generate a typed client in `lib/upload-client.ts`:
```ts
import { createUploadClient } from "pushduck";
import type { AppRouter } from "@/app/api/upload/route";

export const upload = createUploadClient<AppRouter>({
  baseUrl: "/api/upload",
});

export const { imageUpload, documentUpload } = upload;
```
- Build the uploader component in `components/ImageUploader.tsx`:
"use client";
import { imageUpload } from "@/lib/upload-client";
import { useState } from "react";
export function ImageUploader() {
const { uploadFiles, uploadedFiles, progress, isUploading, error } =
imageUpload({
onSuccess: (results) => console.log("Uploaded:", results),
});
const [selected, setSelected] = useState<File[]>([]);
async function handleUpload(files: File[]) {
setSelected(files);
await uploadFiles(files, {
metadata: { folder: "marketing-assets" },
});
}
return (
<section className="space-y-4">
<input
type="file"
accept="image/*"
multiple
onChange={(event) =>
handleUpload(Array.from(event.target.files ?? []))
}
disabled={isUploading}
/>
{progress && (
<progress max={100} value={progress.percentage}>
{progress.percentage}%
</progress>
)}
{error && <p className="text-red-600">Upload failed: {error.message}</p>}
<ul className="space-y-2">
{uploadedFiles.map((file) => (
<li key={file.key}>
<a href={file.url} target="_blank" rel="noreferrer">
{file.name}
</a>
</li>
))}
</ul>
</section>
);
}
- Surface the uploader in `app/page.tsx` or any route:
```tsx
import { ImageUploader } from "@/components/ImageUploader";

export default function HomePage() {
  return (
    <main className="mx-auto max-w-3xl space-y-6 py-10">
      <h1 className="text-3xl font-semibold">
        Type-safe S3 uploads with Pushduck + Next.js
      </h1>
      <p className="text-muted-foreground">
        Upload images, track progress, and keep full control over your storage
        layer.
      </p>
      <ImageUploader />
    </main>
  );
}
```
Remix & Express Notes (remix s3 direct upload tutorial)
Remix loader/action
```ts
// app/routes/api.upload.tsx
import { createUploadHandler } from "pushduck/server";

const handler = createUploadHandler()
  .provider("aws", {
    region: process.env.AWS_REGION!,
    bucket: process.env.AWS_BUCKET!,
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  })
  .routes({
    avatarUpload: (s3) => s3.image().maxFileSize("2MB"),
  });

// Types can't be destructured alongside values, so export them separately.
export const { action, loader } = handler;
export type AppRouter = typeof handler;
```
Express middleware
```ts
// server/uploads.ts
import express from "express";
import { createUploadConfig } from "pushduck/server";

const { s3 } = createUploadConfig()
  .provider("minio", {
    accessKeyId: process.env.MINIO_KEY!,
    secretAccessKey: process.env.MINIO_SECRET!,
    bucket: process.env.MINIO_BUCKET!,
    endpoint: process.env.MINIO_ENDPOINT!,
    region: "us-east-1",
  })
  .build();

const router = s3.createRouter({
  mediaUpload: s3.file().maxFileSize("100MB"),
});

export const uploadRouter = express
  .Router()
  .get("/upload", router.handlers.GET)
  .post("/upload", router.handlers.POST);
```
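To serve the router, mount it under your API prefix. A minimal sketch, assuming the `server/uploads.ts` module above and a hypothetical `server/index.ts` entry point:

```typescript
// server/index.ts — mount the upload router from server/uploads.ts
import express from "express";
import { uploadRouter } from "./uploads";

const app = express();
app.use("/api", uploadRouter); // exposes GET/POST at /api/upload

app.listen(3000, () => {
  console.log("Upload server listening on http://localhost:3000");
});
```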
These snippets address long-tail searches like “Pushduck Express example” and “Remix S3 upload without vendor lock-in.”
Advanced Patterns (file upload progress hook react)
- Track progress & ETA: Use the `progress` object returned from `useUpload()` to feed charts, logs, or analytics.
- Custom metadata: Pass a second argument to `uploadFiles(files, { metadata: { userId } })` and mirror metadata in your storage naming conventions.
- Signed URLs for downloads: Generate time-bound URLs with your storage provider and pair them with Pushduck’s uploaded file descriptors.
- Background processing: Trigger webhooks or queue jobs in your `onSuccess` callback for image optimization or virus scanning.
- Role-based access: Wrap the `GET`/`POST` handlers with your auth middleware so only authenticated users can create uploads.
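As an example of the background-processing pattern, a job can be queued from the success callback. A sketch, where `enqueueJob` is a hypothetical helper for whatever queue you run (BullMQ, SQS, etc.) and `imageUpload` is the typed hook from `lib/upload-client.ts`:

```typescript
// Inside a client component: queue optimization/scanning after upload.
import { imageUpload } from "@/lib/upload-client";
import { enqueueJob } from "@/lib/queue"; // hypothetical queue helper

const { uploadFiles } = imageUpload({
  onSuccess: async (results) => {
    // One job per uploaded file; the worker fetches the object by key.
    for (const file of results) {
      await enqueueJob("optimize-image", { key: file.key, url: file.url });
    }
  },
});
```

In practice the enqueue call usually lives behind an API route of your own, so queue credentials never reach the browser.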
Validation & Security (s3 upload validation library)
- File size control: `.maxFileSize()` accepts human-readable strings (“5MB”, “1GB”).
- MIME filters: `.mimeTypes(["image/jpeg", "image/png"])` ensures only expected formats pass through.
- Global limits: Add `maxFiles` in the client hook to prevent accidental bulk uploads.
- Audit logging: Pipe upload results into your logging stack (e.g., PostHog, Sentry breadcrumbs) along with user ID and request ID.
- Error surfacing: Return custom error messages from callbacks to improve UX and reduce support tickets.
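To make the size-limit behavior concrete, here is a hypothetical standalone helper (not Pushduck’s internal code) showing how a human-readable string like `"5MB"` maps to a byte limit that a validator can compare against `File.size`:

```typescript
// Hypothetical helper: parse "5MB"-style strings into byte counts.
// Illustrates the convention behind .maxFileSize(); not Pushduck source.
const UNITS: Record<string, number> = {
  KB: 1024,
  MB: 1024 ** 2,
  GB: 1024 ** 3,
};

function parseSize(input: string): number {
  const match = /^(\d+(?:\.\d+)?)\s*(KB|MB|GB)$/i.exec(input.trim());
  if (!match) throw new Error(`Unrecognized size: ${input}`);
  return Math.round(parseFloat(match[1]) * UNITS[match[2].toUpperCase()]);
}

console.log(parseSize("5MB")); // 5242880
```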
Provider Matrix (cloudflare r2 file upload nextjs, type-safe s3 uploads typescript)
| Provider | Notes | Docs |
|---|---|---|
| Cloudflare R2 | Great for egress-free CDN usage and low-cost storage. Use `region: "auto"` and the R2 endpoint. | R2 setup |
| AWS S3 | Default choice with regional redundancy; ensure IAM roles are scoped to the specific bucket. | S3 security best practices |
| DigitalOcean Spaces | S3-compatible and priced simply; ideal for secondary regions. | Spaces docs |
| MinIO | Self-hosted option for regulated environments; keep endpoint private. | MinIO docs |
Pushduck vs. Alternatives (self-hosted uploadthing alternative)
| Feature | Pushduck | UploadThing | AWS SDK DIY |
|---|---|---|---|
| Infrastructure ownership | You control storage (S3, R2, etc.) | Managed and billed by vendor | Full control |
| Type safety | Strong (client/server generated) | Strong | Manual |
| Pricing | OSS (no per-upload fees) | SaaS pricing tiers | Pay cloud costs + dev time |
| Bundle size | ~6KB client | ~33KB client | Depends (often >200KB) |
| Framework support | 16+ frameworks | Focused on React/Next.js | Any, but DIY |
| Vendor lock-in | None | Medium | None |
| Upload progress hooks | Built-in | Built-in | Needs custom code |
FAQ & Search-Friendly Answers
- How do I migrate from UploadThing? Mirror your route IDs, generate a Pushduck client, and reuse existing storage buckets. Update client components to call the new hooks. Expect a 1–2 hour migration for standard apps.
- Can I run Pushduck on Vercel Edge Functions? Yes. Pushduck’s handlers are edge-compatible as long as your storage provider supports the region.
- Does Pushduck store my files? No. You point it at any S3-compatible bucket and keep full control of the data.
- How do I restrict uploads by role? Gate the `POST` handler with your auth middleware (e.g., Clerk, Auth.js) and reject requests before they reach the handlers produced by `createUploadConfig()`.
- What’s the best way to handle very large files (>500MB)? Use background workers with tus.io or chunked uploads. Pushduck’s roadmap mentions chunking support; check their GitHub issues for updates.
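The role-restriction answer can be sketched directly in the Next.js route file. This is an illustrative pattern, not Pushduck-specific API: `getSession` stands in for your auth library’s helper, and `router` is the upload router built earlier in `app/api/upload/route.ts`:

```typescript
// Sketch: gate the generated POST handler behind auth in
// app/api/upload/route.ts. `getSession` is a hypothetical auth helper.
import { NextRequest, NextResponse } from "next/server";
import { getSession } from "@/lib/auth"; // assumption: your auth library

const { GET, POST: basePOST } = router.handlers;

async function POST(req: NextRequest) {
  const session = await getSession(req);
  if (!session) {
    // Reject before any presigned URL is issued.
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }
  return basePOST(req);
}

export { GET, POST };
```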