Ethereum co-founder Vitalik Buterin says AI can help create more effective decentralized governance models and enable users to make more informed decisions.
Buterin made the argument in a Sunday post.
“The usual solution, delegation, disempowers. It leads to a small group of delegates controlling the decision-making process, while their supporters, after pressing the delegation button, have no influence,” he said.
He estimated that average DAO participation rates range from 15% to 25%, which can lead to problems such as centralization of power and ineffective decision-making. In the worst case, low participation can enable governance attacks, where a bad actor acquires enough tokens to pass a malicious proposal before other members notice.
AI-powered assistants that vote for you
Buterin proposes that personal-assistant large language models (LLMs) could help solve the “attention problem” by providing users with the relevant information needed to vote.
“If the governance mechanism requires you to make a large number of decisions, a personal agent can cast all the necessary votes on your behalf based on the preferences it infers from your personal writing, conversation history and direct statements,” he said.
“If an agent is unsure how you would vote on an issue and believes it is important, it should ask you directly and provide all the relevant context,” Buterin added.
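The behavior Buterin describes — infer a stance from the owner's stated preferences, vote when confident, and escalate to the owner when unsure — can be sketched as follows. This is an illustrative toy, not Buterin's specification: the class, the keyword-matching stand-in for an LLM, and the confidence threshold are all assumptions.

```python
# Hypothetical sketch of the "personal agent" voting flow described above.
# All names (VotingAgent, infer_preference, etc.) are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class Proposal:
    title: str
    summary: str


class VotingAgent:
    """Casts votes from the owner's stated preferences; escalates when unsure."""

    def __init__(self, preferences: dict[str, str], confidence_threshold: float = 0.7):
        # preferences maps a topic keyword to the owner's stance ("for"/"against")
        self.preferences = preferences
        self.confidence_threshold = confidence_threshold

    def infer_preference(self, proposal: Proposal) -> tuple[str, float]:
        # Toy stand-in for an LLM inferring a stance from the owner's writings:
        # match proposal text against topics the owner has a known position on.
        text = (proposal.title + " " + proposal.summary).lower()
        for topic, stance in self.preferences.items():
            if topic in text:
                return stance, 0.9  # high confidence: topic explicitly covered
        return "abstain", 0.2       # low confidence: nothing matched

    def vote(self, proposal: Proposal, ask_owner) -> str:
        stance, confidence = self.infer_preference(proposal)
        if confidence < self.confidence_threshold:
            # "If an agent is unsure ... it should ask you directly"
            return ask_owner(proposal)
        return stance


agent = VotingAgent({"treasury diversification": "against", "grants": "for"})
covered = Proposal("Fund grants round 5", "Allocate tokens to a community grants program")
novel = Proposal("Acquire NFT", "Buy a rare NFT for the DAO vault")

print(agent.vote(covered, ask_owner=lambda p: "for"))          # inferred automatically
print(agent.vote(novel, ask_owner=lambda p: "needs-review"))   # escalated to the owner
```

In a real system the `infer_preference` step would be an LLM conditioned on the owner's writing and conversation history, and `ask_owner` would be an interactive prompt rather than a callback.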
Lane Rettig, a researcher at the Near Foundation who specializes in artificial intelligence and governance, told Cointelegraph last year that the nonprofit was working on a similar idea: AI-powered digital twins that vote on behalf of DAO members to solve the problem of low voter participation.
Privacy is a critical aspect that must be maintained
According to Buterin, another challenge with highly decentralized governance arises when key decisions depend on private or sensitive information, such as during negotiations, internal disputes or funding decisions.
Related: Vitalik Buterin proposes simulated transactions to boost cryptocurrency security
“Typically, organizations solve this problem by appointing a small group of people with broad authority to handle these tasks,” he said.
He added that an alternative solution could be for users to put their “personal LLM in a black box: the LLM sees the private information, produces a rating based on it and outputs only that rating. You don’t see the private information, and no one else can see the contents of your personal LLM.”
“All of these approaches assume that each participant exposes much more information about themselves and potentially submits much larger inputs. This makes privacy protection all the more important,” Buterin said.
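The black-box idea Buterin sketches — private inputs go in, only a single rating comes out — can be illustrated with a minimal stub. This is an assumption-laden toy, not his design: the function name, the keyword-based scoring, and the execution-environment note are all illustrative.

```python
# Illustrative sketch (not Buterin's spec) of the "LLM in a black box" idea:
# private documents go in, and only a single scalar rating comes out.
def black_box_rating(private_documents: list[str], proposal_text: str) -> float:
    """Stand-in for a sealed personal LLM: reads private context, emits one score.

    In a real deployment the model would run inside a trusted or encrypted
    environment so that neither other voters nor observers can inspect the
    private inputs or the model's contents; only the final rating is revealed.
    """
    # Toy scoring: fraction of private documents that mention the proposal's topic.
    topic = proposal_text.lower().split()[0]
    matches = sum(1 for doc in private_documents if topic in doc.lower())
    return matches / len(private_documents) if private_documents else 0.0


# Only the aggregate rating leaves the box; the documents themselves do not.
rating = black_box_rating(
    ["internal memo: grants budget is overextended", "notes on the grants process"],
    "grants expansion proposal",
)
print(round(rating, 2))
```

The key property is the interface, not the scoring: callers receive a single number, so the sensitive inputs never leave the sealed evaluation step.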
Magazine: IronClaw competes with OpenClaw, Olas launches bots for Polymarket – AI Eye
