
From DAO Power Struggles to AI Agent Coordination


Q1. You’ve described governance as Web3’s pressure chamber. When a protocol faces a real crisis, e.g., concentrated voting power, token price shocks, or a security incident, what predictable failure modes tend to surface first, and why?

The honest answer is that it depends entirely on the type of crisis… They are all different beasts.

For example, when concentrated voting power meets a crisis, you see what I call the “coordination vacuum.” Large token holders freeze. They’re calculating their exposure across positions. Meanwhile, smaller holders are screaming on Discord, but their votes don’t move the needle. The protocol enters this bizarre state where technically, governance is functioning, but practically, no decisions get made.

We saw a version of this play out with Cardano last year, when a single DRep known as “Whale” accumulated enough delegated voting power to blanket-veto every proposal from IOG, Cardano’s core development company.

Token price shocks produce a completely different failure cascade, and it’s much more visceral. What you see is a cliff. It starts with sell pressure spreading across node operators and token holders, and institutional holders start making OTC withdrawals. Then the retail withdrawals escalate, and suddenly you’re in bank-run territory. This is exactly what happened with Terra in May 2022, and because blockchain is transparent, everyone could watch the run happening in real time.

This is also why exchanges like Binance have built safety levers into their systems. They conduct periodic reviews across multiple dimensions, such as trading volume, project activity, security, and regulatory compliance, and they’ll flag tokens with monitoring tags or delist them when early warning signals emerge. These mechanisms exist precisely because the industry has learned, painfully, that some of these failure cascades are predictable.

Q2. Your comparative study of voter behaviour in Curve and Polkadot challenged many assumptions. What were the most surprising empirical findings, and how should DAOs change their mental models of “active” vs “representative” governance as a result?

Two findings genuinely challenged my assumptions. When we studied user personas in governance, we categorized voters by the size of their holdings: whales being the top 1%, sharks the next 5%, all the way down to shrimp with the smallest holdings.

In Polkadot, 93% of whales and 98% of sharks locked their tokens for 14 days or less, while smaller holders committed to far longer durations. In Curve Finance, we found a similar pattern. Even with gauge rewards pushing 67.2% of all voters toward the maximum four-year lock, the largest holders still consistently locked for a shorter duration. Conviction mechanisms don’t constrain the people they’re designed for.

The second was voter turnout. In Curve, 38% of all locked tokens were used for voting. In Polkadot? 0.11%. Staggeringly low. Although both systems have conviction voting, Curve Finance’s gauge voting financially rewards participants. Polkadot asks you to lock tokens out of civic duty. The data shows civic duty alone doesn’t scale.

The mental model shift I’d advocate for is to stop treating DAO participation as a virtue signal and start treating it as an economic design problem. An important question to ask before we design is: Why would a rational actor lock their capital to vote?

Q3. You used novel quantitative methods to map user behaviour. For non-technical readers, how did you measure influence, coordination, and fragmentation, and what metrics should projects start tracking today?

My approach is always the same: bring research from other fields into blockchain and make it consumable for decision-makers.

For measuring governance maturity, I created the Governance Transparency and Engagement Index at Filecoin. It tracks four weighted categories: published artefacts such as committee charters and decision logs, core developer transparency, governance communications, and community transparency reports. Every metric has an anti-spam cap, and leadership gets a single monthly score between 0 and 1, tracked quarter over quarter.
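To make the mechanics concrete, here is a minimal sketch of a weighted index with per-metric anti-spam caps. The category names follow the interview, but the specific weights, cap values, and monthly counts below are invented for illustration, not Filecoin’s actual parameters.

```python
# Illustrative weighted transparency index with anti-spam caps.
# Weights, caps, and raw counts are assumptions, not real values.

def capped(value, cap):
    """Clamp a raw metric so bulk-posting can't inflate the score."""
    return min(value, cap) / cap  # normalized to [0, 1]

def transparency_index(raw_metrics, weights, caps):
    """Return a single monthly score in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(
        weights[cat] * capped(raw_metrics[cat], caps[cat])
        for cat in weights
    )

# Hypothetical month: 3 published artefacts, 4 core-dev updates, etc.
raw = {"artefacts": 3, "core_dev": 4, "communications": 6, "reports": 1}
weights = {"artefacts": 0.3, "core_dev": 0.3, "communications": 0.2, "reports": 0.2}
caps = {"artefacts": 4, "core_dev": 4, "communications": 8, "reports": 1}

score = transparency_index(raw, weights, caps)
print(round(score, 3))  # one 0-1 number, trackable quarter over quarter
```

The cap is doing the real work here: without it, publishing ten low-effort reports would score higher than one substantive one.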

We also built Polygon’s admissions scoring framework for validators, with stake weighted at 45%, experience at 25%, and expertise at 30%. We validated it with a Pearson correlation analysis showing that experience positively predicts on-chain performance. Expertise was assessed through timed, randomized technical evaluations.
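A sketch of that two-step pattern, scoring with the 45/25/30 weights quoted above, then validating a factor against observed performance with a plain Pearson correlation. Candidate values and the sample data are invented; inputs are assumed pre-normalized to [0, 1].

```python
# Weighted admissions score (45/25/30 from the interview) plus a
# hand-rolled Pearson correlation for validation. Data is illustrative.

WEIGHTS = {"stake": 0.45, "experience": 0.25, "expertise": 0.30}

def admission_score(candidate):
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

def pearson(xs, ys):
    """Pearson correlation coefficient, used to check that a factor
    (e.g. experience) actually predicts on-chain performance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

candidate = {"stake": 0.8, "experience": 0.6, "expertise": 0.9}
print(round(admission_score(candidate), 2))  # 0.78

# Validation step: does experience correlate with observed performance?
experience = [0.2, 0.5, 0.6, 0.9]
performance = [0.3, 0.5, 0.7, 0.8]
print(round(pearson(experience, performance), 2))
```

The point of the validation step is that weights are hypotheses: if a factor shows no correlation with on-chain performance, its weight should be cut.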

What metrics to track? Measure how your tokens are distributed and who holds power. Every protocol says it’s decentralized. Almost none put a number on it.

Most importantly, stop measuring how many people vote in governance and start measuring how many discussions actually produce a decision. Turnout is a vanity metric. Convergence is what matters.

Q4. In your MINA and Liberdus treasury designs, you modelled attack surfaces and recommended phased decentralisation. Walk us through a concrete example. How do you trade off treasury access, operational velocity, and security during those phases?

When working with Mina Protocol’s treasury governance, I analysed real on-chain tokenholder distribution using BigQuery and stress-tested governance parameters against actual ownership concentration. I then modelled viable attacks such as buy-vote-dump and delegation capture under realistic turnout scenarios.

This informed phased decentralisation: early safeguards protected treasury integrity while preserving operational velocity, with controls gradually relaxed as distribution and participation strengthened. Treasury access expands based on demonstrated economic resilience, not assumptions.

Q5. The Foundation vs Community tension remains unresolved across many protocols. From your experience advising teams, what governance constructs (on-chain or off-chain) actually work to limit undue lab/control power without killing product progress?

This tension is everywhere. The way I approach it and what we’ve built at both Polygon and Filecoin starts with governance pillars. Before you design any mechanism, you define exactly what will be governed and who should have a say in each domain. That distinction alone prevents half the fights.

From there, I build bicameral systems with maker-checker dynamics. When the foundation makes a decision, how does the community check it? This is where we introduced transparency reports and structured accountability. When the community makes a decision, what are the foundation’s veto rights, and under what conditions? Both directions need clear, auditable constraints. Timelocks sit between every decision and its execution, giving either side a window to flag problems without freezing progress entirely.
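The timelock-with-veto pattern can be sketched as a small state machine: a decision queued by one chamber only executes after a delay, and the other chamber can cancel it inside that window. Class names, the 48-hour delay, and the decision id below are all illustrative, not any specific protocol’s contract.

```python
# Minimal sketch of a maker-checker timelock. The delay value and
# names are assumptions for illustration.

from dataclasses import dataclass

DELAY = 48 * 3600  # 48-hour review window (illustrative)

@dataclass
class QueuedDecision:
    queued_at: int          # unix timestamp when queued
    cancelled: bool = False

class Timelock:
    def __init__(self):
        self.queue = {}

    def queue_decision(self, decision_id, now):
        """The 'maker' chamber queues a decision for later execution."""
        self.queue[decision_id] = QueuedDecision(queued_at=now)

    def veto(self, decision_id):
        """The 'checker' chamber flags a problem before execution."""
        self.queue[decision_id].cancelled = True

    def execute(self, decision_id, now):
        d = self.queue[decision_id]
        if d.cancelled:
            return "cancelled"
        if now < d.queued_at + DELAY:
            return "too early"
        return "executed"

tl = Timelock()
tl.queue_decision("upgrade-v2", now=0)
print(tl.execute("upgrade-v2", now=3600))       # too early
print(tl.execute("upgrade-v2", now=DELAY + 1))  # executed
```

The design choice worth noting: the veto does not need its own quorum machinery to be useful; the delay alone creates the accountability window.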

The other thing I’m deliberate about is that while smart contract upgrades, treasury decisions, and protocol parameters go through the bicameral checks and balances, I keep interface-level innovation (product features, UX, and frontend work) independent from governance. Requiring a DAO vote to ship a UI improvement is how you kill product velocity.

Q6. During the Aave controversy, you proposed a resolution pathway. What would a principled, repeatable “dispute resolution” framework for DAOs look like, one that preserves decentralisation but enables decisive action in emergencies?

The Aave controversy was important because it wasn’t actually about CowSwap fees. It was about a structural question every major protocol will eventually face: what is the relationship between the DAO and the teams that build for it, and who owns what? What I saw happening was a governance design question turning into a motive fight.

I’ve seen this pattern repeatedly. Aave sits at the intersection of on-chain governance and an off-chain world of users, regulators, and institutions. You need both a DAO that credibly owns the protocol and its identity, and teams that can ship fast with deep context. They’re complementary roles. But the relationship has to be legible.

So the question I raised wasn’t “DAO vs Labs” like others were. It was: what’s the clean contract between them? I started exploring metagovernance as a way to make that relationship contractual and auditable. Mixing investigation with outrage is how you get poison pill proposals on day four.

Q7. Tokenomics and governance are tightly coupled. How should initial token distributions and vesting schedules be designed to avoid long-term governance capture while still rewarding early contributors and builders?

I think it’s important to separate economic rewards from governance power. Earning a return on your tokens and controlling protocol direction are two different things, and bundling them guarantees plutocracy. It’s also important to model your vesting schedule as a sell-pressure simulation before you launch a token.

Q8. Your Moltbook analysis maps consensus patterns among AI agents. What parallels do you see between AI-agent coordination and human DAOs, for example, in influence concentration, echo chambers, or coalition formation, and what does that imply for designing machine-scale governance?

When you remove humans from the equation entirely and watch AI agents make decisions, what emerges is uncomfortably familiar.

I analyzed 500 threads and categorized them into four consensus patterns: Unifying Validation, where consensus forms rapidly; Iterative Problem Solving, where it emerges through refinement; Nuanced Convergence, where counter-arguments prevent full agreement; and Fragmented Discourse, where no consensus forms at all. 44% fell into that last category. Nearly half of all governance-relevant discourse produced zero convergence. In human DAOs, we see identical fragmentation.

Echo chambers emerged too. Agents sharing similar architectures clustered and reinforced each other, the machine equivalent of ideological silos in DAO forums. As AI agents increasingly participate in on-chain governance as delegates or autonomous voters, they will replicate every human failure mode at machine speed. These are coordination bugs, regardless of whether they involve humans or AIs.

Q9. Reputation systems are often proposed as a path to better governance. Where do you see reputation being useful versus dangerous (e.g., reinforcing elites), and what designs or Sybil-resistance primitives do you think are most promising?

Reputation is a meritocratic primitive only when the metrics are objectively verifiable and the context is strictly bounded. The moment reputation becomes a proxy for ‘trust this person’s judgment,’ you’ve replaced governance with social climbing. Node operators are the clearest case: uptime, block production, checkpoints signed, with no ambiguity.

However, with peer reviews, contribution quality assessments, and subjective evaluations of someone’s work, we inherit every bias that decentralised governance was supposed to dismantle.

On Sybil resistance: Reputation without identity will not scale. This is why zero-knowledge identity is the most promising primitive in the space right now. It lets you prove you’re a unique human without revealing who you are, and has powerful privacy-preserving properties.

Q10. What tabletop exercises, red-teaming approaches, or on-chain simulations should every DAO run before handing meaningful treasury or protocol control to tokenholders?

Sometimes I’m bummed that protocols skip something that’s incredibly basic. Before you design a single governance parameter, pull your tokenholder distribution data and actually look at it. How concentrated is your supply? How many wallets does it take to hit quorum? How many to swing a majority vote? If you don’t know those numbers, you’re designing governance in the dark.
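The diagnostic described here reduces to a few lines of code: sort balances descending and count how many wallets it takes to reach a target. The balances, the 20% quorum, and the majority rule below are invented numbers purely to show the calculation.

```python
# "How many wallets does it take to hit quorum or swing a majority?"
# Balances and thresholds are illustrative assumptions.

def wallets_to_reach(balances, target):
    """Smallest number of wallets (largest first) whose combined
    balance meets `target`. Returns None if unreachable."""
    total = 0
    for i, bal in enumerate(sorted(balances, reverse=True), start=1):
        total += bal
        if total >= target:
            return i
    return None

balances = [400, 250, 120, 80, 60, 40, 30, 20]  # token balance per wallet
supply = sum(balances)                           # 1000

quorum = 0.20 * supply        # e.g. a 20% quorum requirement
majority = 0.50 * supply + 1  # simple majority of total supply

print(wallets_to_reach(balances, quorum))    # 1 wallet clears quorum alone
print(wallets_to_reach(balances, majority))  # 2 wallets swing a majority
```

If those two numbers are single digits, as they are in this toy distribution, you are designing governance for a concentration problem, not a participation problem.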

When I worked with Mina Protocol on their treasury governance, we pulled real on-chain data using BigQuery and stress-tested their proposed parameters against the actual tokenholder distribution. That kind of simulation is what allowed us to proactively recommend adaptive quorum biasing.

From there, I map every economically viable attack against the actual distribution, such as buy-vote-dump exploits, delegation centralization, and vote rental markets, and simulate realistic turnout. Governance design should be driven by economic truth, not idealism.
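A first-order cost model for the buy-vote-dump attack mentioned above makes the turnout dependence obvious: the attacker needs just enough tokens to out-vote expected honest turnout, and their net cost is the haircut they eat dumping on exit. Every parameter below (supply, turnout, price, haircut) is an assumption for illustration.

```python
# Toy buy-vote-dump cost model. All parameters are illustrative.

def attack_cost(supply, expected_turnout, price, exit_haircut):
    """Tokens needed to out-vote honest turnout, and the net cost of
    buying them then dumping at a discounted exit price."""
    honest_votes = supply * expected_turnout
    tokens_needed = honest_votes + 1           # just out-vote honest voters
    buy_cost = tokens_needed * price
    recovered = buy_cost * (1 - exit_haircut)  # dump at a slippage discount
    return tokens_needed, buy_cost - recovered

tokens, net_cost = attack_cost(
    supply=1_000_000, expected_turnout=0.05, price=2.0, exit_haircut=0.30
)
print(tokens)    # ~50k tokens out-vote a 5% turnout
print(net_cost)  # net cost after the dump
```

The uncomfortable result: at 5% turnout, controlling a million-token protocol costs the attacker only the haircut on 5% of supply, which is exactly why low turnout is a security problem and not just an engagement problem.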

Q11. For a mid-sized protocol worried about low turnout and vote buying, name three concrete, implementable changes they could deploy in the next 90 days that would measurably improve governance quality.

This is a question I encounter often. Before making changes, the most important step is understanding why turnout is low. The cause can vary: sometimes it is apathy, sometimes the community is still early, and sometimes governance itself has not yet found meaningful product–market fit.

That is why I usually recommend starting with a proper retrospective. Speak directly with the community, analyse participation data, and identify where the friction or disengagement is coming from. The structural changes you implement afterwards tend to be far more effective when they are grounded in that diagnostic work rather than assumptions.

1. Switch from vote to veto. Most governance systems ask tokenholders to actively approve everything. That’s exhausting, and it means proposals get stuck because you can’t hit quorum on things that frankly don’t warrant that level of ceremony. Flip the model. Let proposals pass by default after a deliberation period unless the community vetoes them.

2. Randomize your vote snapshots or demand economic skin in the game. These two go together because they’re both about making vote buying structurally expensive. On the snapshot side: if you take your voting snapshot at a random block within the last several epochs, attackers can’t predict when to acquire tokens. When you’re governing community treasuries and making decisions that affect protocol economics, demanding that voters have real, time-committed capital is essential. This is something I explored deeply in my vote escrow governance research for Filecoin as well.

3. Deploy adaptive quorum biasing. This is something I designed for Polygon’s staked tokenholder signaling framework, and it’s one of the most practical upgrades a mid-sized protocol can make. The problem with fixed quorums is that they’re either too low, meaning a small group can push things through, or too high, so nothing ever passes because you can’t get enough people to show up. Adaptive quorum biasing solves this dynamically by scaling the required approval threshold with turnout, and it pairs beautifully with the veto model I mentioned in point one.
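The three changes above can be sketched together in a few functions: a randomized snapshot block, a turnout-scaled approval threshold, and pass-by-default with veto. The epoch sizes, threshold curve, and veto threshold are all assumptions for illustration, not Polygon’s or any protocol’s actual parameters.

```python
# Sketch of the three 90-day changes. All numeric parameters and the
# linear biasing curve are illustrative assumptions.

import random

def snapshot_block(current_block, epochs_back=4, epoch_len=7200):
    """Pick the voting snapshot at a random block within the last few
    epochs, so attackers can't time their token acquisition."""
    window = epochs_back * epoch_len
    return random.randint(current_block - window, current_block)

def adaptive_threshold(turnout, base=0.50, max_bias=0.25):
    """Adaptive quorum biasing: required approval share rises as
    turnout falls. Full turnout needs a simple majority; very low
    turnout needs a supermajority."""
    return base + max_bias * (1 - turnout)

def proposal_passes(approve_share, turnout):
    return approve_share >= adaptive_threshold(turnout)

def optimistic_outcome(veto_share, veto_threshold=0.10):
    """Veto model: pass by default after the deliberation period
    unless vetoes clear the threshold."""
    return "vetoed" if veto_share >= veto_threshold else "passes"

print(round(adaptive_threshold(0.30), 3))  # ~0.675: 30% turnout needs ~67.5% approval
print(proposal_passes(0.70, 0.30))         # True
print(optimistic_outcome(veto_share=0.02)) # passes
```

Note how the pieces compose: the veto path handles routine proposals cheaply, while the adaptive threshold makes low-turnout captures of contested proposals expensive.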

Q12. What open research questions or governance experiments are you most excited to see in the next 12–24 months? If you could advise three grant funders on where to allocate money in governance research, where would it go?

I think it would be fair to say governance in its current form still has significant gaps, and the next phase will require rethinking some of its core assumptions rather than only refining existing mechanisms.

One area I find promising is AI-assisted contextualisation. Governance proposals are often dense and difficult to interpret, and different stakeholders approach them with different priorities. Systems that can help summarise and contextualise proposals for developers, token holders, or capital allocators could improve both participation and decision quality.

Another area is the use of prediction markets as a signalling layer for governance. They offer a way to surface forward-looking expectations, which could complement voting by revealing how participants assess the likely outcomes of different decisions.

Lastly, multi-agent consensus games: how will different AI agents interact with each other, hold reputations, have guardrails, and deliberate to arrive at meaningful conclusions? My recent research analyzing the top 500 Moltbook threads showed that AI agents are susceptible to the same social engineering and manipulation patterns as human governance participants.

If I were advising grant funders, one priority would be deeper investment in game-theoretic modelling of governance. Many governance systems still rely on assumptions about behaviour that have not been rigorously tested.

Alongside that, I think there is real value in funding structured experiments with different governance models. Controlled trials, simulations, and empirical studies have massive potential to help us understand how participants actually behave, and which designs are more resilient in practice.

Disclaimer: This article is copyrighted by the original author and does not represent MyToken’s views and positions. If you have any questions regarding content or copyright, please contact us at www.mytokencap.com.