Summary
I’ve been off the radar on DAG Studio for a few days, and for good reason: I’ve fallen down a rabbit hole of “Agency.”
I’m currently building a high-agency environment where the LLM can actually perceive state, plan multi-step sequences, and manage itself. I’m not interested in building another chatbot you talk to in a window; I want something that doesn’t just respond, but acts.
To make this happen, I’m pairing the new Gemma 4 with the deepagents framework. The goal is an agent that can run its own tools, schedule future tasks via Redis, and proactively ping me on Discord when it has an update or needs a decision.
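The Redis scheduling piece boils down to a delayed-task queue: store each task keyed by its run-at time, and have a poller collect anything whose time has come. With Redis you'd typically do this with a sorted set (`ZADD` with the run-at timestamp as the score, then `ZRANGEBYSCORE` up to "now"). The sketch below mimics that shape in-process with a heap so it runs standalone; the `DelayedTaskQueue`, `schedule`, and `pop_due` names are mine for illustration, not part of deepagents or my actual code.

```python
import heapq
import time

class DelayedTaskQueue:
    """In-process stand-in for a Redis sorted-set scheduler.

    With Redis you'd ZADD(task, score=run_at) and poll with
    ZRANGEBYSCORE(key, 0, now); here a heap plays the same role.
    """

    def __init__(self):
        self._heap = []  # (run_at, task) pairs, ordered by due time

    def schedule(self, task, delay_seconds, now=None):
        now = time.time() if now is None else now
        heapq.heappush(self._heap, (now + delay_seconds, task))

    def pop_due(self, now=None):
        """Return every task whose run-at time has passed."""
        now = time.time() if now is None else now
        due = []
        while self._heap and self._heap[0][0] <= now:
            due.append(heapq.heappop(self._heap)[1])
        return due

q = DelayedTaskQueue()
q.schedule("ping-discord", delay_seconds=5, now=100.0)
q.schedule("retry-build", delay_seconds=60, now=100.0)
print(q.pop_due(now=110.0))  # → ['ping-discord']
```

The poller loop would then hand each due task to the agent, which decides whether the result warrants a Discord ping.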
The Grind
Getting this working has been a bit of a grind. I’ve spent the last few days hammering through the entire stack: Jinja templates, llama-server, ollama, LangChain, and deepagents. I’ve been swapping between ChatOpenAI and ChatOllama wrappers, and I eventually ended up writing my own custom Chat wrapper for Gemma 4 just to handle my specific requirements.
Why bother?
Because I want a fleet of specialized agents to automate my dev environment (and more), and eventually to handle some of the heavy lifting on DAG Studio itself.
I’m moving away from the boring “Prompt → Response” loop toward “Trigger → Action → Reflection.” It’s a bit chaotic, it’s fun, and it’s exactly where I want to be.
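Concretely, the loop I have in mind is: an event fires, the agent runs the matching action, then records what happened so the next decision can build on it. A toy sketch of that shape (the `Agent` class, its fields, and the handler names are mine, not the deepagents API):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy Trigger → Action → Reflection loop."""
    handlers: dict                                # trigger name -> action callable
    journal: list = field(default_factory=list)   # reflections, in order

    def on_trigger(self, trigger, payload):
        action = self.handlers.get(trigger)
        if action is None:
            return None                           # unknown trigger: ignore
        result = action(payload)                  # Action
        self.journal.append({                     # Reflection
            "trigger": trigger,
            "result": result,
        })
        return result

agent = Agent(handlers={"file_changed": lambda p: f"re-ran tests for {p}"})
print(agent.on_trigger("file_changed", "dag_studio/core.py"))
# → re-ran tests for dag_studio/core.py
print(len(agent.journal))  # → 1
```

The journal is the part that makes this more than an event handler: it's the state the agent reads back when planning its next step.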
The Meta
- Put together this post