Tokens represent a powerful new technology that opens up a vast design space. However, we are still in the early stages of understanding and improving token design. Think of it as navigating uncharted waters: the potential is enormous, but there's no established map yet. Just as early explorers developed navigation tools to travel and chart unknown territories safely, the Web3 community is gradually establishing best practices to guide effective token models.
I'll start drawing that map with seven key insights to consider in token design:
The most common pitfall in token design is developing complex models without a specific purpose. A token model isn't "good" or "bad" on its own; it's effective if it meets its objective, and ineffective if it doesn't.
For example, Axie Infinity initially designed its token model to reward players with in-game currency for playing. However, the system struggled with hyperinflation as players earned more tokens than the game's economy could absorb. This pointed to a key issue: the objective of the token wasn't clear. Was it to attract new players, reward existing players, or create a sustainable in-game economy? A clearer objective might have led to a more balanced design that could better withstand rapid player growth.
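To see the dynamic concretely, here is a minimal sketch (in Python, with entirely hypothetical numbers) of what happens when a game's token faucet outpaces its sinks:

```python
# Minimal sketch: emissions vs. sinks in a play-to-earn economy.
# All numbers are hypothetical, chosen only to illustrate the dynamic.

def simulate_supply(days: int, daily_players: int, reward_per_player: float,
                    daily_sink_per_player: float) -> float:
    """Track circulating supply when every player earns more than they spend."""
    supply = 0.0
    for _ in range(days):
        minted = daily_players * reward_per_player      # tokens created (faucet)
        burned = daily_players * daily_sink_per_player  # tokens destroyed (sink)
        supply += minted - burned
    return supply

# If the faucet outpaces the sink, supply grows without bound.
print(simulate_supply(days=365, daily_players=100_000,
                      reward_per_player=50.0, daily_sink_per_player=20.0))
# -> ~1.1 billion net new tokens in a year, with nothing absorbing them
```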
Tip: Before designing token mechanics, clearly define the token's core purpose. For a gaming project, the goal might be to drive player engagement and reward loyalty, while for a governance token, it could be to incentivize active participation and decision-making. This core objective should shape every aspect of the token’s design, ensuring alignment with the project’s mission.
When building something new, it’s essential to study existing models objectively rather than imitating popular projects without critical assessment. Often, teams evaluate token models based on the token price or project popularity, but these metrics don’t necessarily reflect the model’s design quality.
Consider Curve Finance, a stablecoin-focused DEX. Its token model is built around "veCRV," which incentivizes users to lock their tokens for extended periods. This design has been widely admired for its ability to align long-term incentives. However, copying the "locked token" concept without understanding Curve's specific goals (stability and deep liquidity for stable assets) could lead other projects into design errors, as not all projects benefit from long lock-up periods.
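As a rough illustration of the mechanism, the sketch below approximates the vote-escrow idea: voting power scales with lock duration and decays toward zero as the unlock date nears. It simplifies the actual veCRV contract considerably:

```python
# Simplified sketch of Curve's vote-escrow idea: voting power scales with
# lock duration and decays linearly as the unlock date approaches.
# (Approximation only; the on-chain veCRV contract differs in detail.)

MAX_LOCK_SECONDS = 4 * 365 * 24 * 3600  # Curve caps locks at roughly 4 years

def voting_power(locked_crv: float, seconds_until_unlock: float) -> float:
    remaining = min(seconds_until_unlock, MAX_LOCK_SECONDS)
    return locked_crv * remaining / MAX_LOCK_SECONDS

# 1,000 CRV locked for the full 4 years starts near 1,000 veCRV...
print(voting_power(1_000, MAX_LOCK_SECONDS))       # ~1000.0
# ...but the same tokens locked for 1 year start at only ~250 veCRV.
print(voting_power(1_000, MAX_LOCK_SECONDS // 4))  # ~250.0
```

The long lock only makes sense because Curve's goal is durable, sticky liquidity for stable assets; a project whose users need flexibility may find the same mechanic counterproductive.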
Tip: Study other token models with a clear focus on their objectives and outcomes, rather than simply assuming that a popular model will work for your project. Each token model has unique goals and constraints.
Every token model relies on a set of assumptions about user behavior, system constraints, and economic factors. Articulating these assumptions clearly is critical, as they underpin the entire design.
For instance, Helium, a decentralized network for IoT devices, assumed that network participants would act rationally, seeking the most efficient locations for hotspots to maximize coverage and rewards. However, as demand grew, hotspot placements became concentrated, reducing network efficiency. What was rational for individual operators turned out to be inefficient for the network as a whole, so the team's initial assumptions about behavior didn't match how users actually behaved.
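A toy model makes the gap between assumption and behavior easy to see. The sketch below (illustrative only, not Helium's actual HIP-17 scaling rules) assumes area rewards are shared among the hotspots deployed there:

```python
# Toy model of the placement assumption: if rewards in an area are shared
# among the hotspots deployed there, per-hotspot earnings fall with density.
# (Illustrative only; Helium's real density-scaling rules are more involved.)

def reward_per_hotspot(area_reward: float, hotspots_in_area: int) -> float:
    return area_reward / max(hotspots_in_area, 1)

# A "rational" operator compares a crowded urban area to an empty rural one:
print(reward_per_hotspot(100.0, 50))  # 2.0   -> crowded area, diluted rewards
print(reward_per_hotspot(100.0, 1))   # 100.0 -> underserved area, full reward
# In practice many operators still clustered where deployment was easiest,
# so the "rational placement" assumption broke down at the network level.
```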
Tip: State assumptions explicitly and verify that they align with your model’s goals. If you assume users will act in certain ways, ensure those assumptions are realistic. This transparency will help avoid unintended outcomes and make it easier to adjust the model if assumptions prove inaccurate.
Just because you’ve defined assumptions doesn’t mean they will hold true in practice. Validation is essential to ensure that assumptions match real-world conditions.
A good example is Bitcoin's security assumption: that a majority of hash power remains honest. This assumption has generally held for Bitcoin, but smaller blockchains with less hash power have been 51%-attacked because the same assumption didn't match their reality. Ethereum Classic, for example, suffered 51% attacks in 2020 due to its lower hashrate, illustrating that the honest-majority assumption doesn't apply uniformly across networks.
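This assumption can be tested numerically. The sketch below implements the attacker-success calculation from the Bitcoin whitepaper, showing how quickly safety degrades as an attacker's hash share grows:

```python
# Statistical check of the honest-majority assumption, following the
# attacker-success calculation in the Bitcoin whitepaper (section 11).
import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with hash share q overtakes z confirmations."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually wins with certainty
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

print(attacker_success(q=0.10, z=6))  # ~0.0002: safe on a well-secured chain
print(attacker_success(q=0.45, z=6))  # ~0.77: a near-majority attacker usually wins
```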
Tip: Use techniques like statistical modeling, agent-based simulations, and testnets to validate assumptions. Empirical testing can help catch issues before they become critical vulnerabilities in the live network.
Clear boundaries separate different components of the token model, making the system easier to manage and scale. They help reduce interdependencies between parts, minimizing the risk of conflicts and bugs.
For instance, Uniswap provides a clear separation between its token (UNI) and its core protocol functionalities. The UNI token is primarily a governance token, while the automated market maker (AMM) mechanics remain independent. This abstraction enables the protocol to evolve while keeping UNI’s role in governance clear and focused.
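The sketch below illustrates such a boundary in miniature (a hypothetical structure for illustration, not Uniswap's actual contracts): governance and the AMM core expose separate interfaces and share no internal state:

```python
# Sketch of the boundary Uniswap illustrates: governance and the AMM core
# live behind separate interfaces and share no internal state.
# (Hypothetical structure for illustration, not Uniswap's actual contracts.)

class GovernanceToken:
    """Tracks balances for voting; knows nothing about pool mechanics."""
    def __init__(self):
        self.balances: dict[str, int] = {}

    def votes(self, holder: str) -> int:
        return self.balances.get(holder, 0)

class ConstantProductPool:
    """AMM core; prices trades from reserves alone, no governance state."""
    def __init__(self, reserve_x: float, reserve_y: float):
        self.reserve_x, self.reserve_y = reserve_x, reserve_y

    def quote_y_out(self, x_in: float) -> float:
        # x * y = k invariant: the pool works whether or not a token exists.
        k = self.reserve_x * self.reserve_y
        return self.reserve_y - k / (self.reserve_x + x_in)

pool = ConstantProductPool(1_000.0, 1_000.0)
print(pool.quote_y_out(10.0))  # ~9.9: the swap quote needs no token state at all
```

Because neither class references the other, either side can be upgraded, audited, or replaced without touching the other, which is exactly the property the abstraction buys.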
Tip: Create well-defined abstraction layers to simplify your token model and make it easier to extend. This will help different teams work independently and reduce the risk of introducing unintended behaviors through complex dependencies.
External factors outside the protocol’s control can make token models unstable. Designing around variables like token price, hardware costs, or network latency can lead to vulnerabilities if those variables fluctuate unpredictably.
For instance, Ethena relies heavily on external factors like funding rates to maintain stability in its protocol. To manage potential risks from negative funding rates, Ethena has established a reserve fund as a financial buffer. This reserve can cover losses if funding rates turn negative for extended periods, protecting the protocol’s core operations. However, not all projects have similar contingency measures in place, highlighting the importance of designing token models that account for external fluctuations and maintain resilience.
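A simple stress test shows how such a buffer can be sized. The sketch below (with hypothetical placeholder figures, not Ethena's actual reserves) drains a reserve under sustained negative funding:

```python
# Back-of-envelope stress test of a reserve buffer against sustained
# negative funding, in the spirit of Ethena's reserve fund.
# All parameters are hypothetical placeholders.

def reserve_after_stress(reserve: float, hedged_notional: float,
                         funding_rate_8h: float, days: int) -> float:
    """Remaining reserve after paying negative funding 3x per day."""
    for _ in range(days * 3):                 # funding settles every 8 hours
        payment = hedged_notional * funding_rate_8h
        if payment < 0:                       # negative rate: protocol pays
            reserve += payment
    return reserve

# A $40M reserve backing $1B of hedges, at -0.01% per 8h interval:
print(reserve_after_stress(40e6, 1e9, -0.0001, days=90))
# -> ~$13M left after a 90-day stress period; the buffer absorbs the losses
```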
Tip: Minimize reliance on external variables where possible. If your model depends on them, have contingency plans for significant shifts, or design mechanisms that can adapt to changing conditions.
Token design is dynamic. Every adjustment or feature addition can have ripple effects across the system. Without revalidation, even minor tweaks can create unforeseen vulnerabilities.
An illustrative example is Compound, a DeFi lending protocol. In 2020, Compound introduced a new governance model and COMP token incentives. However, small details of the incentive structure led users to "farm" COMP at scale, looping supply and borrow positions to maximize rewards, which pushed up circulating supply and diluted the token's value. Compound quickly learned the importance of revalidating the token model with each change.
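The underlying incentive math is easy to sketch. With hypothetical rates (not Compound's historical figures), the example below shows why looping supply and borrow positions became profitable once token rewards entered the picture:

```python
# Sketch of why COMP incentives invited "farming": if reward APY exceeds
# the net cost of borrowing, looping supply -> borrow -> supply is profitable.
# Rates below are hypothetical, not Compound's historical figures.

def farm_apy(supply_apy: float, borrow_apy: float,
             reward_apy_supply: float, reward_apy_borrow: float,
             leverage: float) -> float:
    """Net APY of a recursive supply/borrow position at a given leverage."""
    earn = (supply_apy + reward_apy_supply) * leverage
    pay = (borrow_apy - reward_apy_borrow) * (leverage - 1)
    return earn - pay

# Borrowing at 4% looks irrational until token rewards enter the picture:
print(farm_apy(0.02, 0.04, 0.05, 0.05, leverage=3.0))  # ~0.23 -> 23% APY
```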
Tip: After every change in the token design, revalidate to ensure the model still meets its objectives. This avoids the risk of creating new issues while solving old ones and helps maintain a stable token economy.
Designing tokens effectively is both an art and a science. While there is no clear map yet, these guidelines provide a structured approach to token design. The key to a successful token model is clarity of purpose, rigorous validation, and flexibility to adapt as new challenges emerge. By following these principles, token designers can build systems that are not only functional but resilient in the face of change.
Because tokenomics isn’t an exact science, having expert support can be a game-changer when building a Web3 project. At Smart-chain, our tokenomics team has worked with numerous projects and witnessed many founders underestimating the complexity of tokenomics, thinking it’s just a matter of pie charts and vesting schedules. In reality, market conditions and external variables can easily disrupt a project’s success if these factors aren’t carefully planned for. That’s why we recommend taking all these insights seriously when designing tokenomics to build a resilient and adaptable model.