Frappe Framework: The Technical Autopsy of a Low-Code Dream

A technical autopsy of Frappe's architectural patterns — ORM string-stitching, permission sprawl, memory leaks, and no-rollback migrations — backed by more than a year of hands-on development experience and the maintainers' own admissions.


The Pitch That Hooks You

You’re evaluating frameworks for a business application. You need forms, workflows, role-based permissions, REST APIs, a database layer, background jobs, and an admin interface. Building all of that from scratch sounds like six months of work that you don’t have.

Then you find Frappe.

Full-stack. Low-code. Batteries included. Define a “DocType” in the UI, and the framework generates your database table, REST API, form view, list view, and permission model — automatically. Need an ERP? ERPNext sits on top and gives you accounting, inventory, HR, and CRM out of the box. The GitHub stars are respectable. The documentation looks comprehensive. The demo is slick.

You spin up a bench, create your first DocType, watch the form materialize out of thin air, and think: this is the future of rapid application development.

I thought the same thing. So did a lot of developers. What follows is what we found after the honeymoon ended — not opinion dressed as fact, but a catalog of architectural decisions that the framework’s own maintainers have acknowledged as problematic, backed by forum threads, blog posts, and GitHub issues from the community that lived through them.

This isn’t a hit piece. It’s a technical autopsy. And if you’re evaluating Frappe right now, you deserve to read it before you commit.

TL;DR

  • The pattern: Frappe was patched, not redesigned. Every layer has a workaround stapled to it, and every workaround has its own edge cases.
  • The cost: ORM string-stitching, five-layer permission sprawl, gc.freeze() + gthread memory mismatch, MariaDB lock-in, and migrations with no rollback.
  • The verdict: Frappe saves you time for the first two weeks. Then you spend that time back, with interest.

The Pattern Nobody Sees Until It’s Too Late

Step back and the pattern across every flaw is the same. The ORM was string concatenation, then patched with input sanitization, then patched with a query builder that doesn’t cover legacy code. The permission system grew to three layers, then five in Frappe Press, then a skip_roles() hack with a TODO comment to remove it. Memory management was patched with gc.freeze(), then patched again with max_jobs worker restarts, then patched again with fail_stuck_press_jobs. Migrations were patched with patches.txt. Schema diffing was patched with bench migrate. Permission scaling was patched with a “Resync” button.

Nothing in Frappe was redesigned. It was patched. Each patch addresses a specific failure mode without revisiting the assumption that produced it. The result is a framework where every layer has a workaround stapled to it, and every workaround has its own edge cases that someday will get their own workaround.

I say this with real frustration because Frappe works. At first. The DocType wizard is genuinely clever. The auto-generated APIs are convenient. In thirty minutes, you have something that looks like it would take weeks to build from scratch. I’ve seen developers — myself included — feel a genuine spark of excitement. That’s the trap. Because the framework optimizes for time-to-first-impression at the expense of time-to-production, and the debt compounds faster than you realize.

What follows is every layer where that pattern plays out. Not because I’m looking for problems. Because the problems keep finding me.

The ORM: Stitching Strings, Patching Holes

Every web framework lives or dies by its database abstraction. Frappe’s ORM was, for most of its history, built on string concatenation.

The framework’s own engineering blog puts it plainly: the central flaw was “validating and sanitising bad inputs instead of explicitly allowing known good inputs.” The internal implementation “used strings to stitch together the final query.” This isn’t a community complaint — it’s a confession from the maintainers.

The consequences were predictable. Multiple SQL injection vulnerabilities were discovered and patched reactively over the years. Each fix addressed a specific injection vector rather than solving the systemic problem. If your framework’s security model is “we’ll patch the holes as attackers find them,” you don’t have a security model — you have a prayer.
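The failure mode is easy to demonstrate in miniature. Below is an illustrative sketch — sqlite3 standing in for MariaDB, with an invented tabUser table — showing why a string-stitched query is structurally injectable while a parameterized one is not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tabUser (name TEXT, role TEXT)")
conn.executemany("INSERT INTO tabUser VALUES (?, ?)",
                 [("alice", "Admin"), ("bob", "Guest")])

user_input = "nobody' OR '1'='1"  # classic injection payload

# String-stitched: the payload rewrites the WHERE clause itself.
stitched = f"SELECT name FROM tabUser WHERE name = '{user_input}'"
leaked = conn.execute(stitched).fetchall()    # every row comes back

# Parameterized: the payload is just an odd, non-matching username.
safe = conn.execute(
    "SELECT name FROM tabUser WHERE name = ?", (user_input,)
).fetchall()                                  # nothing comes back

print(len(leaked), len(safe))  # → 2 0
```

Sanitizing inputs tries to enumerate the bad payloads; parameterization makes the payload inert by construction — which is exactly the "allow known good inputs" posture the maintainers' blog post describes.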

Frappe has a dedicated vendor page on CVEDetails with a documented track record of SQL injection CVEs. CVE-2025-56381, for which a public PoC exists, is a recent example of the class of issues that emerges when a framework’s ORM was historically built on string stitching.

Note

Version 16 update: Frappe has introduced a unified query builder based on pypika. It’s a genuine improvement. But it doesn’t retroactively fix the years of raw SQL already embedded in ERPNext and every custom Frappe app ever written.

The N+1 Problem Nobody Escapes

Like any ORM, Frappe suffers from the N+1 query problem. But unlike mature ORMs (SQLAlchemy, Django’s ORM, ActiveRecord) that provide eager loading, select_related, or prefetch_related out of the box, Frappe’s answer was… write a JOIN yourself.

The typical Frappe anti-pattern looks like this:

# Fetch all sales orders
orders = frappe.get_all("Sales Order", fields=["name", "customer"])

# Now fetch items for EACH order — one query per order
for order in orders:
    order["items"] = frappe.get_all(
        "Sales Order Item",
        filters={"parent": order["name"]},
        fields=["item_code", "qty", "rate"]
    )

With 500 orders, that’s 501 database queries. The “fix” is to write a raw SQL JOIN — which defeats the purpose of having an ORM.
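The standard workaround short of raw SQL is two queries plus in-memory grouping: fetch all parents, then fetch every child row for those parents in a single IN (...) query and stitch them together in application code. A minimal sketch with sqlite3, using invented so/so_item tables standing in for Sales Order and its child table:

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE so (name TEXT)")
conn.execute("CREATE TABLE so_item (parent TEXT, item_code TEXT, qty REAL)")
conn.executemany("INSERT INTO so VALUES (?)", [("SO-1",), ("SO-2",)])
conn.executemany("INSERT INTO so_item VALUES (?, ?, ?)",
                 [("SO-1", "A", 1), ("SO-1", "B", 2), ("SO-2", "C", 3)])

# Query 1: all parents.
orders = [{"name": n} for (n,) in conn.execute("SELECT name FROM so")]

# Query 2: all children for those parents in ONE round trip,
# then group in Python -- 2 queries total instead of N+1.
names = [o["name"] for o in orders]
placeholders = ",".join("?" * len(names))
by_parent = defaultdict(list)
for parent, item_code, qty in conn.execute(
    f"SELECT parent, item_code, qty FROM so_item "
    f"WHERE parent IN ({placeholders})", names,
):
    by_parent[parent].append({"item_code": item_code, "qty": qty})

for o in orders:
    o["items"] = by_parent[o["name"]]

print(by_parent["SO-1"])  # two item rows for SO-1
```

In Frappe terms the second query would be something like frappe.get_all("Sales Order Item", filters={"parent": ["in", names]}) — but the grouping still lives in your code, because the ORM won't do it for you.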

The 97% Trap

Frappe’s maintainers estimate that 97% of data manipulation can be done through the ORM’s building blocks. That sounds great until you realize that the remaining 3% — the complex reports, the multi-table aggregations, the queries that actually matter in a business application — require you to “bypass the ORM layer and write raw SQL queries yourself.”

And when you bypass the ORM, you also bypass Frappe’s permission model. The engineering blog admits this means developers must “reimplement Frappe’s non-trivial permission model to avoid leaking unauthorised data.” So you’re writing raw SQL and reimplementing authorization. At that point, what is the framework giving you?

A community audit found approximately 694 uses of frappe.db.sql() across 263 files in the codebase — raw SQL scattered throughout what’s marketed as a “low-code” framework. That’s not an escape hatch being used in edge cases. That’s load-bearing raw SQL.

Child Tables and Data Modeling: Where Reality Doesn’t Fit

Frappe’s “child table” concept is central to how you model one-to-many relationships — line items in an invoice, rows in a material request, entries in a journal. On the surface, it’s elegant. In practice, it’s a minefield of artificial constraints.

You cannot nest a child table inside another child table. This has been requested since at least 2017, reopened multiple times, and remains unsupported. In the real world, data is hierarchical. A production order has operations; each operation has material inputs. Frappe says: flatten it or hack around it.

Child tables don’t have their own controllers. All validation logic for child rows must be crammed into the parent document’s controller. Validating stock availability for each line item in an order? You’re writing a loop inside the parent’s validate method, mixing parent-level and row-level concerns into a single, increasingly unmanageable function.

And here’s the one that catches people off guard: non-admin users cannot directly query child tables using Frappe’s ORM. Methods like frappe.get_all() and frappe.db.get_list() fail for non-privileged users trying to retrieve child table rows. The workaround is to write a whitelisted server method or — you guessed it — raw SQL.

The grid UI for child tables is limited to displaying 10 columns of data. If your line item has more than 10 fields that users need to see simultaneously, you’re out of luck.

These aren’t edge cases. One-to-many relationships with validation, permissions, and reasonable display requirements are the bread and butter of business applications — the exact domain Frappe claims to serve.

The same data modeling rot extends to the tabUser table itself. Frappe uses the user’s email address as the primary key — a choice from an era before UUIDs were standard. In modern systems, users are identified by an immutable UUID. This ensures that if a user changes their email, gets married and changes their name, or merges accounts, their core identity remains stable across all relational tables.

In Frappe, if dipankar@dipankar-das.com wants to update their email address to a corporate domain, the framework cannot just update an email column. Because the email is the primary key, it must execute a cascading rename operation across every single table in the database — audit trails, document owners, assigned tasks, comments, permissions. If this cascading operation fails halfway through (due to a lock, a timeout, or a custom table missing the cascade hook), the database is left in a corrupted state where half the system thinks the user is the old email, and half thinks it’s the new one.

It is a fundamental data modeling error baked into the bedrock of the framework. And it’s the kind of error that never shows up in a demo.
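For contrast, here is the surrogate-key design in miniature. This is an illustrative sqlite3 sketch with invented tables, not Frappe's schema: with the email as the key, a rename must rewrite every referencing table; with an opaque UUID, it touches one column in one row.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")

# Natural-key design (Frappe-style): the email IS the identity,
# copied into every table that references the user.
conn.execute("CREATE TABLE usr_nat (email TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE comment_nat (owner TEXT)")   # stores the email
conn.execute("INSERT INTO usr_nat VALUES ('old@example.com')")
conn.execute("INSERT INTO comment_nat VALUES ('old@example.com')")

# Renaming means a cascading rewrite of every referencing table.
for table, col in [("usr_nat", "email"), ("comment_nat", "owner")]:
    conn.execute(f"UPDATE {table} SET {col} = ? WHERE {col} = ?",
                 ("new@example.com", "old@example.com"))

# Surrogate-key design: identity is an opaque UUID; email is just data.
conn.execute("CREATE TABLE usr_sur (id TEXT PRIMARY KEY, email TEXT)")
conn.execute("CREATE TABLE comment_sur (owner_id TEXT)")  # stores the UUID
uid = str(uuid.uuid4())
conn.execute("INSERT INTO usr_sur VALUES (?, 'old@example.com')", (uid,))
conn.execute("INSERT INTO comment_sur VALUES (?)", (uid,))

# Changing the email touches exactly one row in one table.
conn.execute("UPDATE usr_sur SET email = 'new@example.com' WHERE id = ?",
             (uid,))

stale = conn.execute("SELECT * FROM comment_sur WHERE owner_id = ?",
                     (uid,)).fetchall()
print(len(stale))  # → 1: the reference is still valid, untouched
```

In the natural-key version, every table in the loop is a point where the cascade can fail halfway; in the surrogate-key version, there is no cascade to fail.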

The Permission System That Fights Itself

Frappe’s permission model has three layers, each capable of overriding the one beneath it. On paper, that’s defense-in-depth. In practice, the surface area required to reason about access is enormous, and the layers actively undermine each other.

  1. tabDoc Perm (Role-Level Permissions): If a role is permitted, any user with that role can access any document in that DocType. No row-level restriction. No scope.

  2. tabUser Perm (User-Level Permissions): To mitigate the over-permissioning, Frappe added user-level restriction. But if the link field is simply not present on the document, the restriction silently fails open, resulting in massive data leaks.

  3. tabDoc Share (Permission Override on Doc Level): A user with share permission can share a document with anyone and grant them write access, regardless of what role-level or user-level permissions say. The docs put it plainly: it “bypasses every check mentioned above.”

Configuring access for one user means traversing all of these layers, plus Role Profiles, Permission Managers, Module Profiles, and Custom DocPerms — a parade of DocTypes the framework spawned to hold a permission model together.

The Industry Went One Way. Frappe Went the Other.

The industry has been moving toward least-privilege for a decade. Zero-trust, deny-by-default, explicit grants — the consensus is that systems should refuse access unless a rule grants it. Frappe runs the opposite playbook: absence of any role deactivates the permission barrier entirely. No roles assigned? You’re implicitly approved.

This default-permissive stance isn’t a minor quirk. It’s the philosophical opposite of how modern systems are designed. Zero-trust architectures operate on the principle that every access must be explicitly authorized — nothing is assumed safe. Frappe’s permission model operates on the reverse: access is assumed safe unless a role explicitly blocks it. The surface area for reasoning about who can see what grows combinatorially, because you’re not checking “does a rule grant access?” — you’re checking “have all the right rules been applied?” And missing one rule means data leaks.
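Reduced to a toy sketch, the difference is one missing branch. Both functions below are invented illustrations of the two philosophies, not Frappe's actual code:

```python
def deny_by_default(user_roles, allowed_roles):
    """Zero-trust style: access only if some rule explicitly grants it."""
    return any(r in allowed_roles for r in user_roles)

def permissive_style(user_roles, allowed_roles):
    """Frappe-style (as described above): with no rule configured,
    the permission barrier is deactivated entirely."""
    if not allowed_roles:        # no rule present -> fails open
        return True
    return any(r in allowed_roles for r in user_roles)

# A DocType nobody remembered to configure:
print(deny_by_default(["Guest"], set()))    # → False (fails closed)
print(permissive_style(["Guest"], set()))   # → True  (fails open)
```

In the deny-by-default model, a forgotten rule is a visible bug (someone can't access what they should); in the permissive model, a forgotten rule is an invisible data leak.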

When Frappe builds products on top of itself, the permission sprawl compounds. Frappe Press ended up with five separate permission mechanisms layered on each other: hooks.py permission hooks, the @protected decorator (described in its own docstring as “stupid” and “magical”), @role_guard decorators, action_guard decorators, and Press Permission Groups. These don’t integrate. If a user accesses documents through standard Frappe APIs instead of Press’s custom APIs, the granular permissions are bypassed entirely.

Frappe Press also completely ignores Frappe’s built-in DocShare feature — a search for frappe.share in the Press codebase returns zero results. They built an entirely separate permission system from scratch because the built-in one couldn’t serve their use case. The framework’s signature feature, replicated from the ground up, inside the framework’s own flagship product.

Frappe Cloud’s own documentation admits: “Implementation of access controls is not complete. There maybe [sic] hiccups here and there.” When the people who built the system feel the need to disclaim their access controls in the docs, the signal is clear.

The Hooks Are Not Middleware

Here’s what’s less obvious: Frappe doesn’t have a real middleware layer.

Every modern Python web framework has middleware. Django has middleware classes. Flask has before_request/after_request. FastAPI has dependencies and middleware. The pattern is universal because cross-cutting concerns — auth, rate limiting, request logging, tracing, CSRF — belong in one place that runs on every request.

Frappe has hooks (before_request, after_request) that are roughly middleware-shaped, but they don’t compose, don’t short-circuit cleanly, and don’t have a documented ordering contract between apps. There’s no idiomatic place to plug in OpenTelemetry, no clean way to layer a rate limiter, no standard pattern for tenant-scoped request validation. Teams either monkey-patch frappe.handler, fork the routing layer, or stand up a separate proxy in front of the bench to do what middleware would do in any other framework.

This creates a subtle but critical gap. Permission evaluation happens at the API surface — middleware checks permissions before reaching the controller. That’s the surface-level defense. But internal code paths — doctype.validate(), whitelisted server methods, background job handlers — bypass that middleware entirely. The permission checks inside those functions are manual, opt-in, and inconsistently applied. Every write operation that calls validate() or a whitelisted method needs its own permission evaluation. If a developer forgets, there’s no safety net.
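One defensive pattern teams adopt is to make the check structural rather than opt-in: a decorator that refuses to run the entry point unless its permission predicate passes. This is a hypothetical sketch of that "safety net," not Frappe API:

```python
import functools

def requires_permission(check):
    """Wrap an internal entry point so a missing permission check is
    impossible rather than merely forgettable."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            if not check(user):
                raise PermissionError(f"{user} denied for {fn.__name__}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorate

# A hypothetical whitelisted method guarded at definition time.
@requires_permission(lambda user: user == "alice")
def submit_order(user, order_id):
    return f"{order_id} submitted by {user}"

print(submit_order("alice", "SO-1"))   # runs normally
try:
    submit_order("bob", "SO-1")        # fails loudly, not silently
except PermissionError as exc:
    print("blocked:", exc)
```

The point is where the check lives: on the function itself, so every caller — HTTP handler, background job, validate() hook — passes through it, instead of each call site remembering to check.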

The downstream effect: observability is bolted on per-app rather than installed once. Auth additions live in scattered hooks. Cross-cutting changes touch dozens of files. And the API looks properly guarded from the outside while the backend logic runs with whatever permissions the calling context happens to have — which is often Administrator because so much internal code assumes it.

The Runtime Tax: Workers, Memory, and WSGI

Frappe’s background workers — the RQ-based job runners — are not standalone Python processes. They are Frappe processes. Every worker boots the entire Frappe runtime: load frappe.controllers, hydrate every DocType meta, connect to MariaDB and Redis, register hooks, populate frappe.local. Then, after all that, it pulls a job off the queue.

This means a worker doing nothing — idle, waiting for a job — sits at hundreds of MB of resident memory. Spawn ten workers and you’ve spent gigabytes before processing a single job. Want a lightweight job runner that just sends an email or hits an API? Doesn’t exist. Every job pays the framework tax.

Frappe uses gc.freeze() — Python’s garbage collector freeze — to reduce memory usage in its Gunicorn workers. The idea is sound: freeze all pre-fork objects into a “permanent generation” that the GC ignores, so forked worker processes can share memory via Copy-on-Write. Frappe claims ~50MB+ savings per worker.

The cost isn’t in the savings. It’s in what happens after. gc.freeze() permanently exempts pre-fork objects from collection. Any object created after the freeze that holds a circular reference back to something frozen becomes immortal. The GC will never collect it.

The gthread Complication

When we looked deeper, we found the real problem isn’t just gc.freeze() — it’s how it interacts with the worker class Frappe actually ships with. bench start runs Gunicorn with gthread as the worker class:

/workspace/frappe-bench/env/bin/gunicorn
- "-b"
- "0.0.0.0:{{ .Values.frappeBench.service.containerPort }}"
- "--workers=2"
- "--threads=4"
- "--worker-class=gthread"
- "--log-level=debug"
- "--max-requests"
- "5000"
- "--max-requests-jitter"
- "500"
- "-t"
- "120"
- "--graceful-timeout"
- "30"
- "frappe.app:application"
- "--preload"

With gthread, workers don’t fork — they use threads. gc.freeze() was designed for forking workers where Copy-on-Write gives you the memory benefits. With threads, the Copy-on-Write advantage disappears entirely. You get the freeze’s cost without its benefit.

The result isn’t a literal one-way leak — max-requests 5000 does eventually restart workers and release memory. But the behavior is unpredictable. Between restarts, memory drifts upward as the frozen permanent generation exempts objects from collection and thread-local state accumulates. HPA can’t reliably scale down because memory usage doesn’t correlate with actual demand. You’re left with pods that should have scaled down but can’t, because the application’s memory behavior is opaque and non-recoverable within a worker’s lifetime.

The Code That Freezes Everything

The real culprit lives in Frappe’s own codebase:

def freeze_gc():
    global _gc_frozen
    if _gc_frozen:
        return
    # Both Gunicorn and RQ use forking to spawn workers. In an ideal world, the fork should be sharing
    # most of the memory if there are no writes made to data because of Copy on Write, however,
    # python's GC is not CoW friendly and writes to data even if user-code doesn't. Specifically, the
    # generational GC which stores and mutates every python object: `PyGC_Head`
    #
    # Calling gc.freeze() moves all the objects imported so far into permanant generation and hence
    # doesn't mutate `PyGC_Head`
    #
    # Refer to issue for more info: https://github.com/frappe/frappe/issues/18927
    gc.collect()
    gc.freeze()
    # RQ workers constantly fork, there' no benefit in doing this in that case.
    _gc_frozen = True

Notice the comment: “Both Gunicorn and RQ use forking to spawn workers.” That was true for the original design. But gthread doesn’t fork. The code makes an assumption that doesn’t match the deployment reality.

Kubernetes HPA Breaks

This has a real operational consequence: HPA doesn’t work. Kubernetes Horizontal Pod Autoscaler scales down when application workload drops and memory usage decreases. Normal applications release their resources. Frappe workers don’t.

The memory stays elevated after peak load. HPA sees high memory and keeps replicas at their maximum. If new nodes are created for this workload, they’re preserved instead of recycled. The result is higher costs — you’re paying for pods that should have scaled down but can’t because the application never releases what it’s taken.

The only reliable fix is lowering --max-requests so workers restart more frequently, reclaiming memory before it drifts too far. But this trades memory predictability for CPU overhead — you’re constantly recycling workers instead of letting them settle. It’s a band-aid, not a fix. The underlying issue remains: gc.freeze() on non-forking workers is a design mismatch.

If HPA doesn’t work, you need to fall back to static replica counts. You’re manually sizing your cluster based on peak load that may not repeat for months, paying for it 24/7. That’s not how you operate on Kubernetes. That’s how you operate a monolith that forgot it was supposed to be cloud-native.

The Smoking Gun in Press’s Codebase

The fingerprint of these issues is visible in Frappe Cloud’s own code. Press ships scheduled functions named fail_stuck_press_jobs, fail_old_jobs, and mark_stuck_updates_as_fatal. There is a function whose entire purpose is to fail jobs that got stuck. Step timeouts are hardcoded to 18000 seconds (5 hours) for individual migration steps — long enough that even a memory-leaking worker will probably finish before being killed. These aren’t features. They’re admissions in code form.

And beneath all of this sits WSGI — one thread per request, no async I/O. Profiling a trivial /ping endpoint (which only returns “pong”) with py-spy showed that the user code handling the request accounted for only 53.63% of request time. Framework initialization took another 23.86%, and response serialization 22.49%. That’s roughly 46% framework overhead on every request, before your code runs.

ASGI — the async successor specification adopted by Django, FastAPI, and Starlette — handles concurrent connections via async I/O instead of thread-per-request. Frappe cannot adopt it. Its ORM, database layer, and numerous dependencies are synchronous-only. Even if you deployed Frappe behind an ASGI server, the framework would weigh it down and make it slower than pure WSGI. The path to async is permanently blocked by architectural decisions baked in at the foundation.

The Developer Experience: Python Without the Benefits

Type hints exist in Python. Pydantic exists. mypy, pyright, and every static analysis tool in the ecosystem exist. Frappe’s codebase simply doesn’t use any of them.

DocType controllers receive self of type Document — a generic bag where every field is Any. Whitelisted methods take untyped **kwargs from HTTP. frappe.get_doc() returns Document, regardless of which DocType you asked for. There’s no SalesOrder class with typed fields you can autocomplete; there’s a runtime-resolved blob you guess your way through. The framework predates the typing era and has not retrofitted. Newer Frappe code adds annotations sporadically, but the core APIs and the auto-generated DocType layer remain untyped. You write Python in 2026 with the developer experience of Python in 2014.

What this means in practice:

  • IDE autocomplete is useless on DocType fields. You memorize them or grep the JSON.
  • Refactoring is dangerous. Renaming a field requires a project-wide string search, not a type-aware rename.
  • Bugs that mypy would catch in five seconds — typo in a field name, wrong argument order, returning None where a string is expected — surface only at runtime, often in production, often inside a background job that fails silently.
  • Library APIs are inconsistent. frappe.db.get_value() returns Any | None | tuple | dict depending on arguments. frappe.get_all() returns a list of dicts unless you pass as_dict=False. The same function mutates its return shape based on flags.

This isn’t just about developer comfort. It’s about correctness. A typed framework catches entire categories of bugs before they ship. Frappe catches them in production, if it catches them at all.
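The usual mitigation is a hand-written typed facade at the boundary. SalesOrder and load_sales_order below are invented names — a sketch of the wrapper pattern, with a plain dict standing in for the untyped document Frappe hands you:

```python
from dataclasses import dataclass

@dataclass
class SalesOrder:
    """A typed view of one DocType's fields (invented for illustration)."""
    name: str
    customer: str
    grand_total: float

def load_sales_order(raw: dict) -> SalesOrder:
    # Convert at the boundary: a missing or misspelled field fails HERE,
    # with a KeyError naming the field, instead of deep inside business
    # logic at runtime -- the property an untyped Document never gives you.
    return SalesOrder(
        name=raw["name"],
        customer=raw["customer"],
        grand_total=float(raw["grand_total"]),
    )

# In real code `raw` would come from the framework's document API.
so = load_sales_order({"name": "SO-1", "customer": "ACME",
                       "grand_total": "99.5"})
print(so.customer, so.grand_total)  # → ACME 99.5, with autocomplete
```

The cost is obvious: you are maintaining by hand the type information the framework already has in its DocType JSON and refuses to generate for you.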

The Escape-Hatch Culture

When the framework’s primitives can’t express what a developer needs, the codebase reaches for escape hatches. Three show up everywhere, and each one corrodes the layer it punches through.

skip_roles() — when role checks fail, skip them. Press’s own codebase carries the function with a TODO to remove it. Production code calls it because the role engine doesn’t model the actual access pattern. Every call is a permission check that didn’t happen.

frappe.db.commit() in the middle of business logic — Frappe wraps each request in a transaction that commits at the end. When that’s inconvenient, code calls frappe.db.commit() mid-flow. Now you have partial writes that survive a later failure. Atomicity becomes a suggestion.

ignore_permissions=Truefrappe.get_doc(...).save(ignore_permissions=True). Sprinkled through ERPNext and community apps. Means “I know permissions exist, I’m telling them to stay out of my way.” Often correct (system-level code shouldn’t be subject to user-level checks), often wrong (a hot path quietly bypassing the permission model because somebody hit a bug in 2017 and never looked back).

These aren’t anti-patterns the community invented. They’re the framework’s recommended workarounds when its abstractions don’t fit. The cost: every escape hatch is a place where the guarantees you thought you had — auditability, atomicity, authorization — silently don’t apply.
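The atomicity cost of the mid-flow commit is mechanical, not theoretical. An illustrative sqlite3 sketch, with an invented ledger table and conn.commit() playing the role of frappe.db.commit():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (entry TEXT)")

try:
    conn.execute("INSERT INTO ledger VALUES ('debit')")
    conn.commit()      # the mid-flow commit: the debit is now permanent
    conn.execute("INSERT INTO ledger VALUES ('credit')")
    raise RuntimeError("validation failed after the debit was written")
except RuntimeError:
    conn.rollback()    # the end-of-request rollback arrives too late

rows = [r[0] for r in conn.execute("SELECT entry FROM ledger")]
print(rows)  # → ['debit']: half a transaction survived the failure
```

The rollback only reaches back to the last commit, so the debit without its matching credit is now durable — exactly the partial write the request-level transaction was supposed to make impossible.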

Locked In: Redis, MariaDB, and No Way Out

Frappe’s job queue is RQ, which sits on Redis. Redis is also the cache layer, the session store, the SocketIO backplane, and the pub/sub bus. Lose Redis and Frappe doesn’t degrade gracefully — it falls over. There is no pluggable queue backend. Want SQS, RabbitMQ, NATS, or a database-backed queue with transactional outbox semantics? Not supported. Redis is hardcoded into the architecture, and a queue backed by an in-memory store means a Redis crash or eviction loses jobs that were enqueued but not yet picked up.

The database story is worse. Frappe officially supports MariaDB. Postgres support exists but is second-class and full of holes — community threads catalogue migration failures, missing features, and DocTypes that ship MariaDB-specific SQL. The framework’s own raw frappe.db.sql() calls (recall the 694 instances across 263 files) frequently use MariaDB syntax — backtick identifiers, INSERT ... ON DUPLICATE KEY UPDATE, MariaDB-specific JSON functions — that doesn’t translate to Postgres.

The implication: if your team has Postgres expertise, Postgres tooling, Postgres-based observability, or a compliance requirement that mandates Postgres, Frappe is not a real option. You’re picking the framework and its database, and the database choice is MariaDB whether you wanted it or not.

And here’s the kicker: you can’t even get a managed MariaDB on two of the three major cloud providers. AWS RDS is the only major cloud that still offers managed MariaDB. Azure dropped their managed MariaDB offering. GCP Cloud SQL never supported it. If you deploy Frappe on Azure or GCP, you’re running MariaDB on a VM — managing patches, backups, HA, and scaling yourself. The framework that promises “batteries included” makes you build the database ops layer from scratch on most cloud platforms.

Migrations make the lock-in worse. Frappe’s bench migrate has no rollback. Schema changes happen as a side effect of DocType JSON edits — modify a DocType, run migrate, and Frappe diffs the JSON against the live schema and applies whatever ALTER statements it thinks are needed. There is no bench migrate --rollback 1. If a migration breaks, your options are: fix forward, restore from backup, or surgically reverse the schema by hand.

This also rules out rolling deployments. A Frappe upgrade requires bench migrate, which requires schema changes against a database that can’t tolerate two app versions reading it simultaneously. Production deploys are stop-the-world: drain traffic, run migrate, restart bench, hope. Zero-downtime deployment — table-stakes for any service-grade web app in 2026 — is structurally not on the menu. Every release flips the site into maintenance mode, serving a holding page to every user while the migration runs. For a multi-tenant Frappe Cloud setup with hundreds of sites, it’s an outage propagated across every tenant in sequence.
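For comparison, the contract that bench migrate lacks is small: every schema change ships with its inverse. A minimal invented sketch (sqlite3, a table-rename migration; nothing here is bench or Frappe code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (name TEXT)")

# Each migration carries both directions, so "undo" is a first-class
# operation rather than a restore-from-backup emergency.
MIGRATION = {
    "up":   "ALTER TABLE item RENAME TO tab_item",
    "down": "ALTER TABLE tab_item RENAME TO item",
}

def tables():
    return {r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}

conn.execute(MIGRATION["up"])       # deploy
assert "tab_item" in tables()

conn.execute(MIGRATION["down"])     # and, crucially, un-deploy
print(sorted(tables()))  # → ['item']
```

Django, Rails, and Alembic all enforce some version of this pairing. Frappe's schema-diffing approach, where the DDL is inferred from JSON at migrate time, has no artifact to invert — which is why the only directions available are forward and backup.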

A Frontend Frozen in 2015

In 2026, Frappe’s desk interface — the primary UI that users and developers interact with — is built on jQuery and Bootstrap.

Let that sink in. The JavaScript ecosystem has gone through Angular, React, Vue, Svelte, Solid, and back again. Component-based architecture, virtual DOMs, reactive state management, TypeScript, server components — a decade of frontend innovation. Frappe’s desk is still manipulating the DOM with $('.some-class').toggle().

Here’s the nuance: Frappe does have Frappe UI, a Vue.js + TailwindCSS component library used in newer products like Gameplan and Frappe Cloud’s dashboard. The problem isn’t that the frontend is frozen everywhere — it’s that the core framework experience is frozen while the newer products got the modern treatment.

This creates a split-brain developer experience. New Frappe products use modern tooling. The framework itself does not. Client scripts are written in a jQuery-flavored API (frm.set_value, cur_frm.refresh_fields) that feels like writing code for Internet Explorer 9. If you’ve hired frontend developers in the last five years, they’ve never written jQuery professionally and shouldn’t have to start now.

Upgrade Roulette

Version upgrades in Frappe/ERPNext are not migrations — they’re expeditions, and the forum is littered with battle reports.

The pattern is consistent: new versions change database structures in ways that custom apps and even stock migrations can’t handle cleanly. If you have custom DocTypes, custom fields, or — heaven forbid — direct database modifications, every major upgrade is a roll of the dice. For an ERP system — software that businesses depend on for accounting, inventory, and payroll — this is not an inconvenience. It’s a liability.

Contributing: Abandon All Hope

Open source lives and dies by its contributor experience. Frappe’s is described, by contributors themselves, as “a total waste of time”.

PRs vanish into the void with no feedback for months. There’s no way of knowing which users are maintainers versus ordinary community members. Some PRs merge in a day. Others take three months with no predictable path to resolution. When your established developers describe contributing as frightening and wasteful, you don’t have a contributor pipeline — you have a contributor graveyard.

The Truth About Frappe

Here’s what I wish someone had told me before I started: Frappe is a framework built by developers who needed something for themselves, not a framework built for developers who chose it.

That’s not a criticism — it’s an observation. The DocType wizard is brilliant because its creator needed it. The permission layers grew organically because Frappe Technologies needed them. The gc.freeze() optimization exists because Frappe Cloud’s infrastructure needed to handle scale. Every design decision served its original problem well.

The problem is that those problems were never revisited from a clean-slate perspective. There was no “what if we designed this again today?” moment. There was only “what works for our current customers?” and that’s a fundamentally different philosophy.

The result is a framework that is genuinely impressive in a demo and genuinely painful in production at scale. Not because the developers are incompetent — they’re not. Because every clever shortcut, every patch, every workaround that solved a real problem yesterday creates a new problem today. And the pattern of patching without redesigning means those problems compound.

Who Might Still Benefit

To be fair — Frappe can work if:

  • Your application is simple enough to stay within the 97% ORM coverage
  • You don’t need deeply nested data models
  • Your team is small and can absorb the bench tax
  • You’re building an internal tool where jQuery-era UI is acceptable
  • You won’t need to upgrade major versions frequently (or at all)
  • You don’t plan to contribute upstream

That’s a narrow window, and it gets narrower as your application grows.

What Comes After

If you need a Python full-stack framework: Django with Django REST Framework gives you a mature ORM with eager loading, a massive ecosystem, and a contributor experience that’s the gold standard of open source. If you want something lighter and async-ready: FastAPI with an async ORM gives you type safety, performance, and modern tooling out of the box.

But here’s the honest part that most framework comparisons skip: you’re not limited to Python. If what you need is rapid business application development, the question shouldn’t be “which Python framework is best?” — it should be “what tool is best for this job?”

Frappe will save you time for the first two weeks. Then you’ll spend that time back, with interest. The most expensive software isn’t the kind you pay for. It’s the kind that’s free until you’re too deep to leave.

And honestly? I’d rather be deep in a framework that was designed from day one to handle the things that actually matter — not one that was patched into shape by people who needed it to work yesterday.

Dipankar Das

Designing & Building Scalable, Reliable Systems