Introduction: When “simple configs” turn into programming languages
There are good reasons why JSON has earned its place as the modern software industry's default configuration format: it is simple, language-agnostic, human-readable, and maps directly onto the data structures developers already use. For static or hierarchical data, such as API keys and nested parameters, JSON is hard to beat ‒ a file can be understood at a glance. That simplicity is exactly why JSON became the standard for infrastructure tooling and CI pipelines: the lowest common denominator for configuration.
But as systems grew, a subtle shift happened. Teams started embedding behaviour in configs: rule engines, computed fields, templating, expressions, and ordering constraints. In many cases, JSON has gradually evolved from a declarative data format into an implicit programming language that lacks the clarity and tooling of a real scripting language.
The configuration complexity clock: how teams accidentally build languages
Hadlow (2012) observes that configuration models tend to follow a predictable progression: hard-coded behaviour → parameters → rule-based JSON → embedded DSLs → accidental languages. Teams that try to keep their JSON configs simple often end up rebuilding scripting capabilities in a fragile manner. It starts with incremental steps: a few flags that seem perfectly reasonable, cross-references to reduce duplication. Each step looks logical, but little by little the config becomes a small expression language. Eventually you are designing evaluators, interpreting strings, and fielding bug reports that require you to debug config evaluation.
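To make the drift concrete, here is a minimal, entirely hypothetical sketch of a config that has crossed the line: string cross-references and operator objects force the application to ship a miniature interpreter. The field names (`op`, `args`, the `$` reference syntax) are invented for illustration, not taken from any real rule engine.

```python
import json

# A hypothetical config that has drifted into an expression language:
# operator objects, cross-references, and implicit evaluation order.
config = json.loads("""
{
  "base_timeout": 30,
  "retry_timeout": {"op": "mul", "args": ["$base_timeout", 2]},
  "enabled": {"op": "and",
              "args": [true, {"op": "gt", "args": ["$retry_timeout", 10]}]}
}
""")

def evaluate(node, ctx):
    """A miniature interpreter -- exactly the 'accidental language' trap."""
    if isinstance(node, str) and node.startswith("$"):
        return evaluate(ctx[node[1:]], ctx)      # cross-reference lookup
    if isinstance(node, dict) and "op" in node:
        args = [evaluate(a, ctx) for a in node["args"]]
        return {"mul": lambda a, b: a * b,
                "gt":  lambda a, b: a > b,
                "and": lambda a, b: a and b}[node["op"]](*args)
    return node                                   # plain literal

print(evaluate(config["enabled"], config))        # True: 30 * 2 > 10
```

Note how the evaluator already has to decide reference-resolution order and operator semantics ‒ decisions a real language would document and tool.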
This drift is driven by necessity, not carelessness: product demands grow alongside complexity. The pattern echoes Greenspun's tenth rule ‒ that any sufficiently complex system ends up containing a poorly specified reimplementation of Lisp. Configuration systems are particularly prone to this because they are ubiquitous, so we keep extending rule languages and CI/CD pipelines with increasingly complex logic. Language-in-JSON is a well-known code smell, and nobody sets out to design a programming language in JSON. But something that began as configuration can become programming without you noticing.
Why JSON breaks for behavioural logic
JSON is designed to express structure, not execution. Cracks appear as soon as we force JSON to encode logic: there is no explicit control flow, dependencies hide inside nested fields, validation becomes ad hoc, and execution order remains implicit. At runtime, a missing key can break behaviour in ways that static checks cannot detect.
Moreover, invalid shapes slip through because of JSON’s weak typing: everything is treated as “just data.” Questions like which fields are valid and which operations are permitted become hard to answer, and discoverability is non-existent. To understand a configuration, developers must either consult documentation (usually outdated) or, more likely, read the source code.
Concerns such as how to visualize what runs first, or how to step through a JSON file, make debugging borderline impossible. To compensate, teams end up writing custom interpreters and one-off tooling; in the quest to support what was supposed to be a simple configuration, they reinvent compilers and debuggers. The format does not solve the problem ‒ it continuously fights it.
Graphs as a first-class configuration model
Graphs provide a strong, comprehensible mental model for execution because they make structure and order explicit: nodes represent computational transformations, edges represent dependencies, and directed acyclic graphs (DAGs) define execution order. This creates clear, inspectable workflows rather than encoding behavior indirectly through conventions and nested objects.
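The DAG model above can be sketched in a few lines. This is a minimal illustration using Python's standard-library `graphlib`; the step names are invented, and a real engine would attach behaviour to each node.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Nodes are named steps; each entry maps a step to the steps it depends on.
# The execution order falls out of the structure alone -- nothing is implicit.
dependencies = {
    "fetch":     set(),
    "validate":  {"fetch"},
    "transform": {"validate"},
    "report":    {"transform"},
    "notify":    {"transform"},   # independent of "report": could run in parallel
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # e.g. ['fetch', 'validate', 'transform', 'report', 'notify']
```

The same structure also detects cycles for free: `TopologicalSorter` raises `CycleError` if the "DAG" is not actually acyclic, which is exactly the kind of validation a nested JSON config cannot give you.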
Draw an edge when one step depends on another; leave steps unconnected when they are independent. The structure reflects exactly what happens, and that creates clarity: workflows are represented directly instead of requiring developers to decipher layers of indirection in JSON. Graphs are constrained enough to prevent the chaos of unrestricted scripting, yet expressive enough to model branches and pipelines. The result is structural clarity rather than hidden logic.
Practical advantage of graph-based configs
There is an immediate advantage to making execution visible: operators and engineers collaborate around a shared artifact, non-programmers do not need to read code to understand workflows, and visual editing becomes intuitive. Strong typing instantly improves safety: nodes declare their inputs and outputs explicitly, and a developer cannot connect two incompatible ports. What would be a runtime error with a JSON config is prevented entirely at edit time; malformed configs become nearly impossible.
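Here is a minimal sketch of that edit-time port checking. `Port`, `Node`, and `connect` are hypothetical names invented for this example, not the API of any particular node-editor framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    dtype: type          # the type this port produces or accepts

@dataclass
class Node:
    name: str
    inputs: tuple[Port, ...]
    outputs: tuple[Port, ...]

def connect(src: Node, out_port: str, dst: Node, in_port: str):
    """Refuse to create an edge between incompatible ports --
    the error surfaces while editing, not at runtime."""
    out = next(p for p in src.outputs if p.name == out_port)
    inp = next(p for p in dst.inputs if p.name == in_port)
    if out.dtype is not inp.dtype:
        raise TypeError(f"{src.name}.{out_port} ({out.dtype.__name__}) does not "
                        f"match {dst.name}.{in_port} ({inp.dtype.__name__})")
    return (src.name, out_port, dst.name, in_port)

loader = Node("loader", inputs=(), outputs=(Port("rows", list),))
counter = Node("counter", inputs=(Port("items", list),), outputs=(Port("n", int),))

edge = connect(loader, "rows", counter, "items")   # OK: list -> list
# connect(counter, "n", counter, "items")          # would raise TypeError
```

Real editors typically go further (subtype rules, generic ports), but even this crude `is`-comparison check blocks the whole class of "wrong shape wired into the wrong place" bugs.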
Dependencies become obvious. Instead of guessing from nested structures, debugging becomes a matter of stepping through nodes. Experimental features can be isolated in subgraphs, so it is clear if and when they are used. The entire system becomes easier to validate, reason about, and change. Graphs keep the model “clear” instead of “clever”.
Case studies: graphs already winning in production
Graph-based approaches already win in domains where workflows are iterative and complex ‒ this is not just theory. Game development pipelines depend heavily on node editors for shaders, animations, behaviours, and storyline/quest structure. Creative tooling relies on nodes because visual flow is easy to reason about: for medium-sized tasks, it is simply clearer than scripts.
Beyond gamedev, generative AI workflows set a striking example. Tools like ComfyUI expose node-based pipelines for chaining models and pre/post-processing steps, and this approach has largely won out over Automatic1111 ‒ essentially a web UI over a JSON-style config. Users experiment faster because they construct graphs instead of writing scripts. And because graphs keep complexity legible, these systems scale as workflows grow. In practice, once workflow complexity becomes central, visibility matters more than conciseness ‒ and graphs hold up better than deeply nested configuration models.
Implementation paths: from JSON to graphs
Using graphs does not mean abandoning JSON. JSON can remain the storage format, as it is still easy to read and diff; the logic, however, should be graph-based, with a simple visual editor on top. Execution semantics stay in code while graphs are serialised as nodes and edges in JSON. Frameworks already exist: NodeGraphProcessor, React Diagrams, and React Flow make it relatively straightforward to build a visual editor. Incremental migration is practical: start by modelling one subsystem as a graph, define the node types in code, serialise the structure to JSON, and finally add visualization and validation.
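One plausible shape for that serialisation ‒ nodes and edges as plain JSON, with each node's `type` resolved to behaviour in code. The field names (`id`, `type`, `params`, `from`, `to`) are illustrative, not a standard schema.

```python
import json

# A graph stored as data: node semantics live in code, structure lives in JSON.
graph = {
    "nodes": [
        {"id": "n1", "type": "http_fetch", "params": {"url": "https://example.com"}},
        {"id": "n2", "type": "json_parse", "params": {}},
        {"id": "n3", "type": "store",      "params": {"table": "events"}},
    ],
    "edges": [
        {"from": "n1", "to": "n2"},
        {"from": "n2", "to": "n3"},
    ],
}

text = json.dumps(graph, indent=2)   # JSON stays the storage format...
restored = json.loads(text)          # ...and round-trips losslessly
assert restored == graph
```

Keeping the file this dumb is the point: diffs stay reviewable, validation stays trivial, and all behaviour lives behind the `type` field in versioned code.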
A proof of concept tends to be quick, but production quality takes work. For example, a proficient senior developer usually needs three to four months to build a medium-complexity visualiser with a reliable editor: undo/redo, copy/paste, validation. That editor will also need to evolve as the graph “language” evolves. Even so, the ROI can be strong: fewer runtime errors, better tooling, and improved collaboration.
When to use graphs (and when not to)
JSON or YAML remain the best options when a configuration is predominantly declarative or flat (simple settings, static parameters). But when configs are edited frequently, have complicated dependencies and flow structure, and are maintained by non-technical specialists, graphs come to the rescue.
It is important to identify the tipping point. If your developers spend a lot of time trying to “fix a config” ‒ that is a signal. If many bugs originate in configuration ‒ that is a signal. If developers struggle to expose the app’s flexibility through the config ‒ that is a clear signal to transition to graphs.
Design principles for graphs
Like ordinary code, graph editing requires discipline to avoid a chaotic hell of “spaghetti” graphs. Excellent debugging tools, visualization, validation, and strong typing are the guardrails that prevent errors from breaching the language’s contract. The constraints they impose should not be regarded as limitations: they are what keeps the system maintainable.
Still, users need to be taught to use subgraphs to encapsulate and reuse logic, to keep graph complexity at a reasonable level, and to reach out to developers for help when graphs become unwieldy. When that happens, it may be time to add a small new feature, like a “for” loop node that calls a subgraph multiple times.
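A “for” node of that kind can be sketched very simply. Here `run_subgraph` is a hypothetical stand-in for whatever your engine uses to execute a serialised subgraph ‒ in this toy version a subgraph is just an ordered list of step functions.

```python
def run_subgraph(subgraph, item):
    # Toy engine: a subgraph is an ordered list of step functions,
    # each transforming the value produced by the previous step.
    value = item
    for step in subgraph:
        value = step(value)
    return value

def for_node(items, subgraph):
    """A loop node: execute the subgraph once per input item and
    collect the results -- iteration without free-form scripting."""
    return [run_subgraph(subgraph, item) for item in items]

double_then_label = [lambda x: x * 2, lambda x: f"value={x}"]
print(for_node([1, 2, 3], double_then_label))
# ['value=2', 'value=4', 'value=6']
```

The crucial property is that the loop body is itself a graph: it stays visible, typed, and debuggable, rather than becoming an opaque expression string.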
Conclusion
Configs should be handled like programs the moment they start resembling programs in structure or behaviour. Graph-based configurations are safer and more inspectable while being no less expressive than code. As workflows in CI/CD and AI grow more dynamic, graph-native configuration may become the default. A system’s behaviour can then be made clear for everyone to understand, rather than hidden as programs inside data.