.png)
In the rapidly evolving world of real-world asset (RWA) tokenization, Bermuda is positioning itself as a global pioneer. The Bermuda Monetary Authority (BMA) has initiated an ambitious policy effort to bring RWAs within a globally extensible regulatory regime that ensures investor protection while realizing the opportunity to support the growth of safe and sound global onchain capital markets.
At Plume, we're eager to support them in their work. In our detailed, 40-page comment letter submitted in response to the BMA's Discussion Paper on Asset Tokenisation, we provide detailed recommendations for how the BMA can take the lead as global framework for RWA policy matures into a new reality.
Dear Sirs and Mesdames:
Kimber Labs Inc. (d/b/a “Plume Network Inc.” or “Plume”) appreciates the opportunity to submit comments in response to the Bermuda Monetary Authority’s (“BMA” or “the Authority”) Discussion Paper on Asset Tokenisation, published on November 5, 2025 (“DP”).
Executive Summary
As described in further detail below, Plume’s comments to the DP focus on five (5) key areas:
With a mainnet launch in June 2025, Plume’s mission is to bridge traditional finance (“TradFi”) and decentralized finance (“DeFi”) through innovative real-world asset (“RWA”) tokenisation solutions. We bring firsthand expertise in developing compliance-focused, modular ecosystems for tokenising and managing RWAs.
Our core RWAfi blockchain, an Ethereum Layer-2 blockchain with built-in anti-money laundering, countering the financing of terrorism, and sanctions screening, has achieved over $330 million in total value locked. Plume users account for over 280,000 (non-stablecoin) RWA wallets accounting for >50% of all wallet addresses holding non-stablecoin RWAs. Plume investors include Apollo Global Management, Brevan Howard Digital, Galaxy Ventures, Haun Ventures, and SV Angel as well as Animoca Ventures, LayerZero Labs, Superscrypt, YZi Labs, among other reputed names from traditional finance and crypto. Plume is also a participant in Mastercard's Start Path program for fintech start-ups.
In addition to RWAfi, we also have developed a suite of tools to scale the adoption and integration of RWAs into the global digital economy:
Plume also recently registered a transfer agent (Kimber Transfer Agency LLC) with the SEC. The Plume transfer agent manages securities shareholder records (both onchain and offchain), distributes dividends/yield, and enables tokenisation. The transfer agent is supported by Plume’s onchain fund administration services, designed to accelerate the growth of onchain capital markets, particularly through 1940 Investment Company Act registered funds. Transfer agent services are currently in the process of being integrated into Nest-related services and Arc.
We agree with the Authority that “Bermuda's digital asset framework already provides the legal basis for tokenisation by classifying it as a regulated digital asset business activity” and that “Bermuda’s predominantly principles-based digital asset regime offers significant adaptability and supports innovation, the complex and rapidly evolving domain of tokenisation presents unique challenges that may introduce regulatory uncertainty.”
We applaud the Authority’s efforts to bring additional regulatory certainty to the “unique challenges and opportunities” prompted by the rise of RWA tokenisation. Below, we submit comments in response to specific questions posed in the DP.
Q1: How likely are you, your clients or the entities you advise to issue, invest in or engage with tokenised assets in Bermuda? Additionally, have you observed discussions, interest or demand for tokenisation among your clients or the sectors within which you operate?
Plume has high regard for Bermuda's key role in finance, insurance, and digital assets, and the Authority's expertise derived from its central role in helping supervise these markets. The quality, rigor, and detail of the questions posed and the issues raised by the DP reflect the Authority’s expertise. We are eager to collaborate on a scalable regulatory framework for safe, secure RWAs in a Bermuda-centered global onchain capital market.
Q2: Activities relating to tokenised investments may, in certain instances, be subject to both the DABA and IBA frameworks. In your view, what are the implications of this dual regulatory approach for regulatory efficiency, innovation and market development?
We recommend that tokenised investments be subject to a new legislative framework that would adopt an activities-based regulatory approach tailored to the risks and opportunities of this emerging market. Whether a particular registrant is registered under the DABA or IBA the tokenised investment framework would apply to the extent the registrant engaged in tokenised investment activities. The goal of such an approach should be to ensure the same regulatory outcomes.
Q3: Is the current definition of “investment" under the IBA sufficiently clear to encompass tokenised investments? What specific aspects of the definition require further clarification or enhancement?
The current definition of "investment" under the Investment Business Act 2003 (IBA) appears sufficiently broad to potentially encompass many forms of tokenised investments, as it includes assets, rights, or interests specified in Part 1 of the First Schedule—such as shares, debentures, contracts for differences, and rights or interests in other investments. This could extend to tokenised assets that confer similar rights or economic exposures, whether native to distributed ledger technology (DLT) or as digital representations of traditional assets, helping to mitigate regulatory overlaps with the Digital Asset Business Act 2018 (DABA).
However, to enhance clarity and support innovation without creating regulatory arbitrage, we recommend further guidance or enhancements to the definition in the following areas:
Q4: Do you agree with the Authority’s approach to focus on the substance rather than the form of tokenised investments? Are there practical challenges in applying this principle?
Yes, we agree with the Authority’s approach to focus on the substance rather than the form of tokenised investments. This principles-based method ensures regulatory consistency across traditional and digital assets, fostering innovation without undue barriers. While practical challenges exist—such as assessing substance in hybrid token models—they can be mitigated through targeted guidance and stakeholder collaboration.
Q5b: Should digital twins and native tokens be subject to distinct regulatory frameworks or approaches to account for their differences? Why or why not?
Yes, digital twins and native tokens should be subject to distinct regulatory approaches to reflect their inherent differences. Digital twins inherently introduce credit risk due to reliance on intermediaries for linking tokens to offchain assets, necessitating stronger investor protections around issuer solvency. Native tokens, being fully onchain, pose more systemic risks related to DLT infrastructure, warranting focus on technological resilience rather than credit mitigation.
Q5c: If distinct regulatory frameworks or approaches are deemed necessary, how should the frameworks/approaches differ to effectively address the unique characteristics and functions of digital twins and native tokens?
Frameworks for digital twins should emphasize tailored disclosures on credit risks (e.g., issuer financial health, asset backing verification) and mandate corporate structures that enhance segregation, such as Bermuda's time-tested Incorporated Segregated Accounts (ISA) regime, which ensures legal and operational isolation of assets from issuer liabilities, promoting bankruptcy remoteness - as well as determining entitlement of tokenholders in accordance with the Register of Account Owners. For native tokens, approaches could prioritize onchain governance standards, audit requirements for smart contracts, and interoperability guidelines, avoiding overly prescriptive credit-focused rules that might stifle innovation.
Q7: Do you think any activities performed by tokenisation platform providers, such as the structuring of tokenised investments, fall under existing investment activities?
Yes, certain activities performed by tokenisation platform providers such as facilitating the creation, management, support, existence, and distribution of tokenised investments representing real-world assets (RWAs), as well as customer authentication or acting as an agent for issuers (e.g., controlling token burning, freezing, or transaction revocation) may fall, at least partially, under existing investment activities as defined in the IBA.
For instance, we can appreciate the viewpoint that platforms enabling investors to buy, sell, subscribe to, or underwrite tokenised investments could be classified as "arranging deals in investments" or "dealing in investments" under Part 2 of the First Schedule of the IBA, particularly when these activities bring about transactions in investments.
However, the regulatory approach should remain principles-based to foster innovation and avoid redundancy, especially where providers are already regulated under the DABA or comply with IBA requirements for tokenised investments. This would prioritize substance over form, focusing on key drivers like investor protection, market integrity, and systemic risk reduction without imposing duplicative obligations.
We emphasize that in an onchain context, the risks and the opportunities of tokenised investment products are different. While the risks of tokenised assets are covered extensively by the DP, we emphasize that the opportunity provided by onchain settlement networks accessible to anyone in the globe with internet access provides a unique opportunity for the Authority to harness this opportunity to ensure the safe global distribution of such products under the Authority’s prudent supervision.
Q8a: Given the emergence of innovative investment business models, do you consider it necessary to revisit and potentially expand the definition of investment activities under the IBA, to ensure that all (relevant) activities - particularly those arising from new technologies and business structures - are comprehensively regulated?
Yes, it is necessary to revisit and potentially expand the definition of investment activities under the IBA to comprehensively regulate emerging activities driven by new technologies, such as asset tokenisation. Given the rapid evolution of real-world asset (RWA) tokenisation structures including hybrid models blending digital twins, native tokens, and revenue-sharing agreements, we recommend that any future law or amendments adopt a principles-based approach. This would allow flexibility to capture a wide range of innovative structures without prescriptive rigidity, focusing on core principles like investor protection, market integrity, and risk management, while avoiding regulatory gaps or overlaps with frameworks like the DABA.
Q8b: What challenges or benefits could arise from such an expansion, and what would be the impact on the current regulatory framework?
See immediately above.
Q9a: Are there any other third-party service providers that form part of the tokenisation lifecycle (such as oracles, custodians, data providers or technical service providers) that directly or indirectly impact investor protection or market integrity and, in your view, should fall within the Authority's regulatory perimeter?
We do not believe that additional third-party service providers in the tokenisation lifecycle such as oracles, custodians, data providers, or technical service providers, should fall directly within the Authority's regulatory perimeter, even if they indirectly impact investor protection or market integrity. Instead, we recommend that such providers be subject to third-party risk management requirements by registrants under the DABA or IBA. This principles-based approach aligns with strategies adopted by other regulators, avoiding the regulation of technology itself and focusing on parties with direct relationships to users or customers in tokenised investments, thereby promoting innovation while maintaining necessary safeguards. For instance:
10a: Given that tokenised funds currently fall under the IFA and potentially DAIA (rather than DABA), do you think the industry view is that a dual licensing/authorisation requirement creates unnecessary duplication? Please explain your reasoning.
Our view is that a dual licensing/authorisation requirement under the IFA and potentially DAIA or DABA creates unnecessary duplication for registrants engaging in tokenised investments, as it often results in overlapping compliance burdens, such as redundant registration, reporting, governance, and risk management obligations that do not proportionally enhance investor protection or market integrity. This redundancy can increase operational costs and deter innovation, particularly when tokenisation aims to streamline processes like issuance and administration. Instead, the goal of the Authority's regulation should be to ensure a flexible, principles- and outcomes-based approach to tokenised investments that would apply generally to any BMA registrants, prioritizing substance (e.g., actual risks and economic functions) over form (e.g., rigid categorizations based on technology or structure), thereby reducing duplication while maintaining robust safeguards.
We therefore recommend that the Authority implement a common approach to tokenised investments through new legislation or guidelines that apply to any BMA registrant, regardless of which registrant category they initially registered under.
Q10b: Would a single regulatory regime be sufficient to appropriately mitigate all risks associated with tokenised funds? If so, what, in your view, would be the most appropriate regime and why?
Yes. See immediately above.
Q11a: What legal certainty is currently lacking within Bermuda’s frameworks (e.g., IFA, IBA, FAPBA, DABA, DAIA, etc.) to adequately support tokenised fund operations and accommodate the features unique to DLT, such as the use of DLT-based registers as official registers for tokenised funds?
While Bermuda’s frameworks, including the IFA, IBA, FAPBA, DABA, and DAIA, offer a robust foundation for tokenised fund operations, certain legal certainties are lacking to fully support DLT-unique features and promote innovation. The Authority should ensure:
Q11b: Where a DLT-based register serves as an official register for tokenised funds, what approaches are suitable for rectifying erroneous information?
See immediately above.
Q12a: Are current governance frameworks, roles and responsibilities of fund managers, administrators, custodians and distributors adequate for tokenised funds? Please explain any specific limitations or challenges in applying traditional fund governance models to tokenised funds.
Current governance frameworks under DABA, the IFA and related acts provide a solid foundation for roles like fund managers, administrators, custodians, and distributors, emphasizing accountability and investor safeguards. However, they may not fully address DLT-specific challenges in tokenised funds, such as automated smart contract execution potentially reducing human oversight, real-time onchain transparency conflicting with traditional periodic reporting, or interoperability issues between legacy systems and blockchain. Limitations include inadequate provisions for managing oracle dependencies or smart contract audits, which could expose funds to technological failures. To mitigate, principles-based enhancements could incorporate DLT competency requirements for key personnel and flexible oversight models that adapt to tokenisation's programmable nature.
Q12b: Are traditional distinctions between fund service providers (such as between custodians and administrators) still relevant in the context of tokenised funds, or is there a need for more collaborative or converged models? Please provide examples of how these roles might evolve.
Traditional distinctions between fund service providers remain relevant for maintaining segregation of duties and risk mitigation in tokenised funds, but tokenisation's integrated DLT ecosystems may necessitate more collaborative or converged models. For example, custodians could evolve to handle onchain wallet management alongside administrators' roles in NAV calculations via smart contracts, fostering hybrid arrangements where providers share data feeds through secure APIs. This evolution could include converged platforms where a single entity oversees both custody and administration under unified governance, reducing silos while preserving accountability, as seen in emerging DeFi protocols that blend these functions programmatically.
Q12c: What adaptations or enhancements to existing governance frameworks should be made to ensure investor protection in tokenised funds, particularly regarding oversight of specialised third-party service providers (e.g., custodians, exchanges, smart contract developers, oracle providers and token issuance platforms)?
To ensure investor protection in tokenised funds, existing governance frameworks should be adapted with enhancements like mandatory due diligence protocols for specialised third-party service providers (e.g., requiring independent audits for smart contract developers and oracle providers, or service-level agreements for token issuance platforms). Additional measures could include tailored disclosure requirements on third-party risks, integration of onchain monitoring tools for real-time oversight (e.g., with respect to asset-liability alignment), and principles-based standards for board-level review of DLT dependencies, aligning with the Authority's focus on outcomes like resilience and transparency without stifling innovation.
Q12d: What are your views on the role of, or relationship to, self-custody arrangements, such as those undertaken by fund managers, in the context of tokenisation? Should self-custody be permissible? If so, should this be allowed in general or should it depend on the novelty and technical attributes of the underlying asset being custodied? If the latter, please provide further details.
Enabling the use of self-custody arrangements, including and especially through the use of vault protocols, should be recognized by the Authority, as they leverage immutable smart contracts to automate asset management while preserving user control and transparency. For instance, protocols like the Plume-developed Nest vault protocol offer robust security features through regular third-party audits, non-custodial vault structures, hardware-backed private keys with multi-party quorums for administrative functions, and sequencer-level AML controls that integrate KYC/AML verification, sanctions screening, and compliance tooling to mitigate risks while facilitating secure RWA tokenisation and yield generation.
Vault protocols, such as those implemented in the Nest ecosystem, function as decentralized, onchain financial structures governed by immutable smart contracts that enable users including BMA registrants to create onchain tokenised investments. The Nest protocol allows users to deposit stablecoins (e.g., USDC or pUSD) into digital vaults created by BMA registrants and to receive vault tokens that represent proportional shares in a curated pool of tokenised RWAs like tokenised shares of treasuries, bonds, private credit, ETFs, real estate, or commodities, etc. These vaults automatically deploy assets into yield-generating strategies, continuously rebalance for optimal risk-adjusted returns, and manage liquidity through mechanisms like liquid reserves, automated liquidations, and staggered redemptions. The non-custodial design of Nest and other vault protocol onchain infrastructure aligns with self-custody principles by allowing tokenised fund managers to maintain direct but limited control over assets without intermediaries.
Q13a: What technological and operational safeguards that are specific to tokenised fund structures should be implemented to protect investors?
Technological and operational safeguards specific to tokenised fund structures should prioritize resilience, transparency, and investor protection while fostering innovation. Key measures include: (i) mandatory independent third-party audits of smart contracts to verify immutability, security, and vulnerability resistance, conducted regularly (e.g., annually or upon upgrades); (ii) implementation of multi-signature (multi-sig) wallets and hardware-backed private keys with quorum-based approvals to enforce segregation of duties and prevent unilateral access or insider threats; (iii) built-in AML/CFT/sanctions screening at the transaction level, using tools like on-chain analytics (e.g., TRM or Chainalysis) to block illicit wallets and ensure compliance; (iv) liquidity management practices, such as liquid reserves, automated liquidations, and staggered redemptions; and (v) emergency mechanisms like pause functions.
Q13b: How should disclosure standards address the unique risks of potential discrepancies between offchain documentation (e.g., offering documents) and onchain encoded terms?
Disclosure standards should explicitly address discrepancies between offchain documentation (e.g., offering memoranda, prospectuses) and onchain encoded terms. These standards could include: (i) requiring issuers to provide side-by-side comparisons of offchain legal terms and onchain smart contract logic in plain language, highlighting any potential variances; (ii) regular (e.g., quarterly) audits or attestations by independent third parties to verify synchronization between the two, with results publicly disclosed; (iii) risk warnings in offering documents related to the tokenised product; and (iv) use of standardized templates to embed hyperlinks or verifiable references linking offchain docs to onchain code, e.g., with respect to asset-liability alignment.
Q14: Are traditional investor rights, such as voting, access to information and complaint procedures, adequately preserved in tokenised fund structures? If not, to what extent should they be adapted to address the unique features of tokenised environments?
Traditional investor rights, such as voting, access to information, and complaint procedures, may not be adequately preserved in tokenised fund structures due to the programmable and automated nature of DLT, which can limit direct enforceability of offchain rights or introduce onchain constraints (e.g., smart contract immutability potentially restricting amendments to voting mechanisms or information access).
For instance, in hybrid models, token holders might have indirect exposure to underlying assets without full pass-through of rights. To address these unique features, adaptations should include a principles-based framework emphasizing transparency and proportionality, such as mandatory disclosures on whether and under what conditions a tokenised investment product confers traditional investment rights. For example, when a BMA-regulated Incorporated Segregated Account (ISA) is used as a feeder fund for an onchain or offchain U.S. investment product, where master fund shares may include rights like voting on fund matters or access to detailed financials, if those rights are limited in the ISA feeder fund (e.g., due to segregation or DLT restrictions), the limitations should be explicitly disclosed in offering documents and on-chain metadata. This could be supported by disclosure requirements.
Q15a: Are there any specific challenges associated with the valuation of tokenised funds? For example, are there concerns about discrepancies between offchain and onchain systems or NAV calculation for hybrid or digital asset funds? Please describe these challenges and their potential impact on fund operations and investor protection.
These risks are significantly mitigated by the inherent transparency of DLT-based systems and the dynamics of NAV arbitrage trading. DLT's real-time, immutable, and publicly verifiable ledger allows for continuous on-chain monitoring of asset holdings, transactions, and NAV computations, reducing opacity and enabling instant audits that minimize errors or manipulations in valuation.
Additionally, NAV arbitrage trading facilitated by the low frictions to trade in onchain assets and the transparency of DLT-based markets, where market participants can buy/sell tokens at secondary market prices diverging from NAVm creates natural incentives for price convergence, as arbitrageurs exploit discrepancies to profit, thereby enhancing liquidity and fair valuation, limiting the need for prescriptive regulation in this area.
Q15b: What specific challenges exist in liquidity management for tokenised funds, particularly regarding the potential disconnect between token tradability and underlying asset liquidity? How might these challenges affect investor expectations and fund stability?
We recommend that tokenised investment funds with illiquid underliers be subject to liquidity risk management requirements, as illiquidity can exacerbate valuation uncertainties during market stress or redemption pressures, potentially leading to NAV distortions or unfair investor outcomes. This approach aligns with principles-based regulation under the IFA, promoting resilience by requiring funds to proactively assess and mitigate risks associated with asset convertibility, thereby enhancing overall market integrity and investor confidence without unduly burdening innovative structures.
Drawing from established frameworks like SEC Rule 22e-4 under the Investment Company Act of 1940, high-level requirements could include: (i) adopting and implementing a written liquidity risk management program reasonably designed to assess, manage, and periodically review the fund's liquidity risk; (ii) classifying portfolio investments into liquidity categories (e.g., highly liquid, moderately liquid, less liquid, and illiquid) based on the time to convert to cash without significant value impact, with reviews at least monthly or more frequently in response to market changes; and (iii) establishing board oversight and reporting mechanisms to ensure ongoing compliance and transparency.
Q15d: How might the challenges identified in the valuation, liquidity management and reporting of tokenised funds be effectively addressed through regulatory guidance, industry standards or technological solutions?
The challenges in valuation, liquidity management, and reporting for tokenised funds can be effectively addressed through a combination of regulatory guidance, industry standards, and technological solutions that prioritize principles-based flexibility and innovation. For valuation, regulatory guidance could [...]. With respect to liquidity management, tokenised investment funds with illiquid underliers should adopt SEC-inspired requirements under Rule 22e-4, such as classifying assets by liquidity buckets, board-level review, and relevant disclosures. For reporting, challenges in reconciling onchain and offchain data could be mitigated via guidance requiring attestations and verifiable links between onchain and onchain and offchain ledgers and data sources, with guidance incorporating best practices and the current state of vendor solutions. Overall, a collaborative, principles- and outcomes-based approach, including regulatory sandboxes for testing, would balance investor protection with DLT's efficiencies.
Q32: How should regulatory frameworks address the differing risks and operational characteristics of permissionless and permissioned blockchains, particularly with respect to clearing, settlement and custody mechanisms?
For permissionless blockchains, which are inherently decentralized with open participation and consensus-driven validation, regulators can leverage this structure to mitigate risks in clearing, settlement, and custody: decentralization distributes control across numerous nodes, reducing single points of failure, enhancing resilience against attacks, and providing transparent, immutable ledgers for auditability. Frameworks could require minimum decentralization thresholds, such as node diversity, geographic distribution, and validator independence, to ensure robust consensus, while mandating regular third-party audits of network health and smart contract code to verify operational integrity without stifling innovation. Notably, sequencers in layer 2 solutions (such as the Plume RWAfi blockchain), which perform entirely automated transaction ordering functions, align with decentralization goals by operating without centralized human intervention, supporting the overall safety and soundness of the network.
In contrast, permissioned blockchains, with centralized governance and restricted access, pose higher risks of operational silos or intermediary failures, necessitating stricter entity-level oversight.
Q33: Do you think that DABA would be better suited to address the prudential and operational risks associated with tokenised assets on blockchain infrastructure? If not, what alternative approaches should be considered?
Yes, we believe DABA is well-suited to address the prudential and operational risks associated with tokenised assets on blockchain infrastructure. As a principles-based regime, DABA provides the flexibility to adapt to the unique risks of blockchain-based tokenisation, such as smart contract vulnerabilities, network resilience, and settlement finality, while emphasizing outcomes like investor protection, market integrity, and systemic stability.
This approach avoids the need for entirely new frameworks, leveraging DABA's existing adaptability to cover both permissionless and permissioned blockchains through risk-based enhancements, such as integrating third-party risk management for oracles and bridges. By building on DABA, the Authority can foster innovation in RWA tokenisation while ensuring robust safeguards, consistent with Bermuda's leadership in digital assets.
Q34: Do you agree that the convergence of functionality at the platform layer - encompassing custody, client-asset segregation, operational resilience, conflicts of interest and other risks - necessitates a shift from product-specific regulatory frameworks toward a more activity- and platform-centric approach?
Yes, we agree that the convergence of functionalities at the platform layer, such as custody, client-asset segregation, operational resilience, and conflicts of interest, necessitates a shift from product-specific regulatory frameworks to a more activity- and platform-centric approach. This convergence, driven by blockchain's programmable nature, blurs traditional silos, making holistic oversight more efficient for addressing interconnected risks without stifling innovation. A platform-centric model under DABA (as discussed in Q33) could apply uniform standards to core activities like onchain asset management and smart contract governance, ensuring outcomes like resilience and integrity across integrated ecosystems.
Plume's Nest platform serves as an example of why this shift is appropriate. As a DeFi yield aggregation protocol, Nest converges multiple functions at the platform level: users deposit stablecoins into automated, non-custodial vaults governed by immutable smart contracts, which handle custody through onchain segregation (e.g., proportional vault token shares), operational resilience via auto-rebalancing and liquidity mechanisms (e.g., reserves and staggered redemptions), and conflict mitigation by eliminating intermediaries. Regulating Nest on a per-product basis (e.g., treating each vault's RWA basket separately) would create redundant compliance burdens, whereas a platform-centric approach focuses on overarching safeguards like standardized and robust disclosures that equip investors with sufficient information to assess material risks and investment opportunities, third-party audits of smart contracts and decentralization thresholds, better aligning with its integrated, automated design to promote scalability and investor protection.
We note, however, in order to maintain compatibility with the global nature of onchain capital markets a platform model for regulation should be deployed in a manner consistent with the reach of the platform. For example, the regulation of products (tokenized assets, crypto assets, etc.) should reflect the global distribution of these assets. The regulation of the user interface component of a platform, particularly non-custodial DeFi interface service providers, on the hand should be subjected to regulations on a per jurisdiction basis. DeFi interface service providers are the layer of the DeFi supply chain where national-level disclosure, suitability, and other investor-level policy goals can be properly effectuated.
Q35: Do you think DABA is an appropriate framework to address these emerging risks at the platform level? If not, what alternative approaches could better balance the need for platform-level supervision with product-specific rules to address residual risks?
Yes, we believe DABA is an appropriate framework to address emerging risks at the platform level, as its principles-based structure allows for adaptable, holistic supervision that captures converged functionalities (e.g., custody, resilience, and conflicts) without prescriptive rigidity. By enhancing DABA with platform-centric elements—such as risk-based assessments of decentralization (per Q32) and activity-focused standards—residual product-specific risks can be balanced through targeted guidance, like disclosures for hybrid tokens or liquidity rules for illiquid RWAs. This avoids fragmentation, as seen in platforms like Nest (Q34), where integrated DeFi operations benefit from unified oversight rather than layered product rules, ensuring innovation while mitigating systemic threats.
Q36: How can embedded AML/ATF features in token standards (e.g., transfer restrictions, whitelisting) impact compliance efficiency? What specific features or functionalities do you consider most valuable for monitoring and overseeing compliance in tokenised transactions, and what tokenisation standards are best suited to embed these features while balancing market efficiency and secondary trading?
Embedded AML/ATF features in token standards—such as transfer restrictions, allowlisting/blocklisting, freeze/seize functions, and onchain compliance hooks—significantly enhance compliance efficiency by automating real-time enforcement at the protocol level, reducing reliance on manual post-transaction monitoring, and enabling rapid response to sanctions or suspicious activity.
The most valuable features for monitoring and overseeing compliance in tokenised transactions are:
Standards best suited to embed these features while preserving market efficiency and secondary trading are modular, EVM-compatible extensions to ERC-20 (e.g., ERC-1404, ERC-3643, or custom compliance modules) that make restrictions optional, configurable by the issuer, and bypassable for compliant transfers. These avoid the rigidity of full security-token standards (e.g., ERC-1400) that can impair composability and DeFi integration, while still permitting seamless secondary trading among whitelisted or screened addresses.
Q37: What challenges do you foresee in aligning AML/ATF obligations across multiple regimes? How should AML/ATF obligations interface between different regimes?
Major challenges in aligning AML/ATF obligations across regimes include regulatory fragmentation, conflicting travel rule and data-privacy requirements, inconsistent sanctions enforcement, and disproportionate compliance burdens on non-custodial platforms that structurally eliminate many traditional VASP risks (e.g., no client key control, no custodial wallet provision, no ability to unilaterally reverse transactions).
AML/ATF obligations should interface through a principles-based, outcomes-focused approach: mutual recognition of equivalent regimes (e.g., Bermuda DABA equivalence with EU MiCAR or U.S. frameworks), risk-based deference to the home regulator for supervised entities, and mandatory use of interoperable onchain tools (e.g., automated sanctions screening, real-time proofs of compliance, and freeze/seize capabilities under lawful order) to achieve consistent results without prescriptive uniformity.
We commend the Authority’s leadership in Digital Identity Service Provider regulation. We would encourage the Authority to implement it in conjunction with implementation timelines of yet-to-be-developed global standards. If the Authority were to move forward with it in the interim, it should commence with traditional financial services licensees and centralized exchanges before moving to DeFi web and application interface service providers that offer Bermudan residents access to digital assets available through DeFi-based systems when and if a global consensus to regulate such platforms emerges.
Q38a: What role could digital identity systems play in strengthening AML/ATF compliance within tokenised ecosystems, particularly in enhancing identity verification and reducing the duplication of KYC processes across stakeholders?
Digital identity systems, particularly decentralised (DID) or self-sovereign identity frameworks using verifiable credentials (VCs), could significantly strengthen AML/ATF compliance in tokenised ecosystems by enabling reusable, cryptographically verifiable identity attestations that reduce redundant KYC across issuers, platforms, and custodians while enhancing privacy and user control. In non-custodial ecosystems like Plume’s, DID/VCs allow optional, issuer-specific identity gating (e.g., for restricted vaults or institutional on-ramps) without imposing blanket KYC on permissionless users, thereby minimising duplication and friction while achieving risk-based compliance outcomes.
Q38b: Are current DLT-agnostic identity attestation solutions sufficient to meet Bermuda's AML/ATF requirements and international standards? If not, what specific enhancements or alternative approaches would be necessary?
Current DLT-agnostic identity attestation solutions (e.g., reusable verifiable credentials or third-party KYC attestations) are partially sufficient to meet Bermuda’s risk-based AML/ATF requirements under the Proceeds of Crime (Anti-Money Laundering and Anti-Terrorist Financing) Regulations 2008, particularly Regulation 5 (customer due diligence measures), Regulation 7 (ongoing monitoring), Regulation 11 (enhanced due diligence), and Regulation 14 (reliance on third parties), aligned with FATF Recommendations 10 (customer due diligence) and 16 (wire transfers). They qualify provided the attester is a regulated entity eligible for reliance, with immediate access to underlying records and the relying licensee retaining full responsibility.
Emerging onchain technologies—such as Plume’s Nest protocol and app-level advanced AML screening using TRM Labs datasets—enable sub-second, automated real-time compliance and ongoing monitoring that is faster, cheaper, and more scalable than traditional offchain methods, while preserving permissionless access and innovation in non-custodial ecosystems.
Q40: How do you perceive the potential for developing a unified AML/ATF onboarding process, such as a 'passportable' KYC system, that could be acknowledged across multiple regulatory frameworks to enhance and simplify compliance efforts?
Embedded AML/ATF features in token standards, such as transfer restrictions and whitelisting, can significantly enhance compliance efficiency by automating enforcement at the protocol level, reducing manual oversight, minimizing human error, and enabling real-time monitoring of transactions. For instance, these features allow for programmable controls that flag or block suspicious activities based on predefined rules (e.g., sanctions or illicit finance lists or transaction patterns), streamlining reporting and response times while lowering operational costs for issuers and platforms.
The most valuable features for monitoring and overseeing compliance in tokenised transactions include "freeze and seize" capabilities, which enable issuers to immobilize or confiscate tokens in response to lawful orders, addressing illicit finance risks without disrupting the broader ecosystem. This approach is exemplified by the U.S. GENIUS Act (Guiding and Establishing National Innovation for U.S. Stablecoins Act, enacted July 2025), where freeze and seize forms the core of stablecoin AML enforcement, requiring issuers to implement technical abilities to block, freeze, or burn tokens for violations of BSA/AML requirements or sanctions. We recommend adopting this as a template for tokenised RWAs more broadly, with issuers complying with similar mandates to ensure enforceability against money laundering or terrorism financing, integrated via smart contract admin functions audited for security.
Tokenisation standards like ERC-3643 (for security tokens) or emerging RWA-specific extensions are best suited to embed these features, as they support permissioned transfers, role-based access (e.g., for regulators), and onchain attestations while preserving market efficiency through secondary trading compatibility.
Without a standardized, globally interoperable system, potentially leveraging decentralized identifiers (DIDs) or blockchain-based verifiable credentials, mandatory universal KYC, especially for non-Bermuda-based IP address users, will limit the reach of global onchain capital markets and therefore the opportunity presented by tokenised assets for Bermuda. This fragmentation not only increases operational costs for issuers and platforms but also hinders market efficiency, as users in one jurisdiction may face hurdles incompatible with another's rules, potentially stifling the borderless potential of onchain finance. Moreover, mandatory KYC undermines accessibility, particularly for the estimated 1.4 billion unbanked individuals globally (many in developing regions like sub-Saharan Africa and South Asia), who often lack traditional identification documents but could benefit from digital alternatives. Requiring full KYC upfront risks excluding these populations from tokenised real-world assets (RWAs), perpetuating financial inequality and limiting the democratizing promise of DeFi and onchain capital markets.
To mitigate these issues, mandatory universal KYC for BMA-regulated tokenised asset service providers through traditional approaches should be carefully calibrated against risks and opportunities, as well as uniquely onchain risk mitigants as described below. Once implemented, it should be done in conjunction with a digital identity regime but only for jurisdictions mandating it for DeFi interface service providers. The application of such requirements should be applied on a risk-based tiered approach, such as for high-value transactions, institutional participants, or flagged suspicious activities.
In the interim, AML safeguards can be maintained through alternative, technology-native measures: onchain analytics tools (e.g., from providers like Chainalysis, Elliptic, TRM Labs, etc.) for transaction monitoring and pattern detection; programmable token standards with embedded "freeze and seize" capabilities, as modeled in the U.S. GENIUS Act for stablecoins (and extended to other RWAs); and sequencer-level AML and/or sanctions compliance in Layer-2 networks to automate sanctions screening without centralized intervention. Such an approach ensures robust protection against money laundering and terrorism financing while fostering inclusive growth, aligning with Bermuda's principles-based regime to position it as a hub for innovative, secure global onchain markets.
Q44: What investor education requirements or initiatives should be implemented to ensure that retail investors understand the risks specific to tokenised products?
To ensure retail investors understand the risks specific to tokenised products such as smart contract vulnerabilities, settlement finality uncertainties, credit risks in digital twins, and liquidity mismatches with underlying assets, regulatory frameworks should implement a multi-tiered investor education strategy under a principles-based approach. This could include:
Plume would welcome the opportunity to collaborate with the BMA on developing a standardized educational portal for users of RWA products or platforms regulated by the Authority.
Q46: Given that the initial tokenisation phase operates primarily offchain, what cybersecurity controls should be applied to the interfaces between external validators (lawyers, auditors, asset valuators) and tokenisation platforms?
To address cybersecurity risks at the interfaces between external validators (e.g., lawyers, auditors, asset verifiers) and tokenisation platforms, we recommend a principles-based framework emphasizing secure data exchange, access controls, and ongoing monitoring. This aligns with Bermuda's adaptive regime, focusing on outcomes like data integrity and resilience without prescriptive mandates that could hinder innovation.
Key controls should include:
These controls can be embedded in licensing requirements under DABA and related acts.
Q49: Do you agree that minting should be conditional upon receiving an auditable 1:1 verification from an independent, third-party custodian? Why or why not?
No, we do not agree that minting should be conditional upon receiving an auditable 1:1 verification from an independent, third-party custodian, as this prescriptive requirement could impose undue operational burdens, increase costs, and hinder the efficiency of tokenisation processes without proportionally enhancing safety or investor protection. Instead, a principles-based approach under DABA can ensure 1:1 asset-liability alignment through onchain transparency for the liability component (e.g., immutable records of token issuance and holder claims via smart contracts, verifiable in real-time by regulators and users) combined with robust offchain regulatory reports and disclosures for the asset side (e.g., periodic attestations of backing assets by issuers, supported by audited financials and asset custody confirmations, as well as the involvement of independent asset verifiers such as Bluprynt).
Q50: Given that unauthorised mints typically result from compromised operator keys or internal collusion, what are the most effective approaches to secure minting authority keys, including the use of multi-signature schemes, hardware-backed solutions (e.g. HSMs/MPC) and time-lock mechanisms?
Unauthorized mints don’t result from compromised keys or collusion alone, they occur because of a wide range of reasons. Because of that, multiple risk mitigating approaches are necessary to control for potentially compromised social and technological factors. The most effective approach is a layered, risk-tiered multisig system combined with hardware isolation and mandatory delay mechanisms.
This combination of policies has proven resilient in production systems like Plume and others.
Q51: What practices and standards should be followed to ensure the security and reliability of smart contracts, including independent audits and measures like reproducible builds and pinned dependencies? Are these sufficient to mitigate risks such as code vulnerabilities and supply chain attacks?
Core practices and standards we recommend are:
These controls are highly effective against code vulnerabilities and supply chain attacks when combined with immutable code logic. No single audit round is “sufficient” in isolation and an in-depth defense model is necessary.
Q52a: Do you agree that upgradable smart contracts should be subject to stricter governance controls, such as requiring multi-signature wallets with time-locks for upgrades? Please explain your reasoning.
Yes, we strongly agree. However, not all component parts of onchain protocols are required to be immutable. While upgradable contracts are the primary admin risk surface in production systems, discretion must be applied to not be overly prescriptive at regulator level.
Immutable smart contracts are preferable when feasible for long-lived and simple parts of the system. One example is onchain custody, which in itself is a simple smart contract which can be reasonably immutable for a long time.
Immutability has its own tradeoffs, one of which is the threat actor’s ability to use future technologies to break old protocol math by exploiting functions that are patched in newer versions of EVM and Solidity.
Regulators must not be prescriptive in which protocol areas immutability should be required and instead trust highly technical teams to make those discretionary decisions.
Q52b: What additional measures can help address the risks associated with admin keyholders and protect against insider threats in tokenisation systems? Please provide specific examples of effective controls or best practices.
Best practices and effective controls include:
These controls collectively reduce insider risk to near-zero while preserving operational ability.
Q53a: What is your opinion on implementing circuit-breaker mechanisms to pause critical contract functionality during active attacks? Please address both the potential benefits and risks of such mechanisms.
Circuit-breaker mechanisms (pause functions that activate automatically) are a prudent last-line defence in smart-contract systems, particularly for non-custodial RWA vaults and stablecoin issuers, provided they are narrowly scoped, transparently documented, and designed to preserve liveness guarantees.
Automatic circuit-breakers (onchain parameter-bound triggers such as oracle deviation caps or slippage limits):
Manual circuit-breakers (multisig-activated, informed by real-time monitoring):
Best implementation (as deployed in Plume’s Nest and pUSD vaults): 4-of-6 ownership multisig with 48-hour timelock for critical changes, scope limited to non-critical paths (pause new deposits/mints while keeping withdrawals/redemptions open), guaranteed user self-redemption, and full onchain transparency of all pause events
Q53b: Beyond circuit-breakers, what other incident response controls should be considered to mitigate the potential impact of exploits in real-time? Please provide specific examples of effective controls and their implementation considerations.
Effective real-time mitigation requires moving from reactive alerting to proactive pre-execution filtering. For Layer-2 networks like Plume, this is implemented directly at the sequencer level to block malicious transactions before finalisation. Key controls include:
1. Sequencer-Level Firewalls (Pre-Execution Blocking) Instead of waiting for a transaction to land onchain, the sequencer should utilize a "firewall module" that screens transactions against dynamic threat intelligence.
2. Transaction Simulation & Invariant Checks Before including a transaction in a block, the network can simulate its outcome against the current state.
3. Heuristic & behavioral anomaly detection Implementing deterministic rules that flag non-standard behaviors indicative of probing or attacks:
4. Bridge & cash-out monitoring Implementing "slow lanes" or withdrawal delays for transactions routing large value to known high-risk bridges or fresh addresses with no prior history. This delay allows for manual intervention if the withdrawal is flagged as part of a potential cross-chain exploit.
Implementation considerations: Controls must be transparent (onchain logs of rejections), configurable to minimise false positives, regularly validated by third parties (e.g., Hacken), and proportionate to preserve permissionless access.
Q54a: What are the most effective mechanisms to ensure fairness during token issuance events, such as batch auctions or other launch methodologies?
The most effective mechanisms for fairness in token issuance events depend on the token’s nature (e.g., utility vs. security-like RWA) but prioritise equal access, manipulation resistance, and transparency while preserving efficiency in permissionless environments.
Platforms like Plume’s open-source Arc automate fair deployment workflows (batch mechanisms, cap enforcement, verifiable onchain logic) while connecting issuers to compliant tools without requiring centralised control.
Q54b: Are additional measures needed to protect retail participants from the exploitation of DLT transparency by bots?
Yes, targeted measures can protect retail participants from bot exploitation of blockchain transparency (e.g., front-running, sandwich attacks, sniper bots) without eroding core DeFi benefits like permissionless access and onchain verifiability. These should be encouraged on a principles-based, proportionate basis rather than uniformly mandated, allowing issuers to tailor controls to risk profiles.
Effective mechanisms include:
Plume evaluates and selectively deploys such tools at the sequencer level to balance retail protection with liveness.
Q55a: Given that user losses during token launches often result from phishing and social engineering attacks, what safeguards should be in place to strengthen user protection?
Phishing and social engineering are one of the primary loss vectors in token launches, so safeguards should prioritise verifiable communication integrity, official channel authentication, and user education without imposing disproportionate burdens on issuers.
Q55b: How effective are controls like digitally signed announcements and user education campaigns in mitigating these risks?
Digitally signed announcements are highly effective when wallets natively verify signatures – they prevent impersonation at scale and give users immediate confidence. Education campaigns provide supplementary value (e.g., teaching signature checking and domain verification) but have limited impact on determined phishing victims. Technical controls (signed messages, domain protections, wallet warnings) remain the primary and most reliable defence; education reinforces but cannot replace them.
Q56a: What technical controls, such as Domain Name System Security Extensions (or other measures), should tokenisation projects implement to prevent domain hijacking and protect end-users from being redirected to malicious sites during high profile events?
Domain hijacking and phishing sites are among the most common attack vectors during high-profile token launches and product announcements, often causing significant retail user losses. Tokenization projects should implement a defence-in-depth approach focused on prevention, detection, and rapid response.
Recommended technical controls:
Plume implements all of the above across its primary domains and launch infrastructure, and we strongly recommend these as baseline requirements for any regulated tokenisation activity involving retail participation.
Q56b: Are there additional controls that should be considered to secure communication channels?
Yes – community communication channels (especially Discord and Telegram, the primary venues for real-time user interaction in tokenisation projects) are high-value targets for impersonation, raid attacks, and coordinated phishing. Beyond domain and API protections, the following controls are essential:
Q57a: What due diligence standards and risk management practices should be adopted when integrating tokens with external DeFi protocols?
Particularly for RWA tokenization protocols, integrations into external DeFi protocols can introduce material third-party risk, including technical, operational, and exposure to illicit finance. On Plume, illicit finance exposure is mitigated by the sequencer AML/ATF tech itself, so all protocols deploying on Plume’s own blockchain benefit from that out of the box.
On other chains, due diligence must be comprehensive, ongoing, and should explicitly cover AML/ATF and sanctions compliance, as many DeFi protocols lack regulated safeguards.
Key due diligence standards:
Risk management practices:
All of these ensure full accountability without blocking beneficial composability.
Q57b: Would maintaining an onchain allowlist of approved integrations and setting exposure caps meaningfully mitigate the risks of contagion from third-party failures?
Onchain allow-lists and exposure caps can mitigate contagion risks by restricting interactions to vetted protocols and limiting TVL concentration in any single integration. However, rigid global allow-lists introduce meaningful centralisation risk: a single governance process (or admin key set) becomes the gatekeeper for all composability, creating a potential single point of failure, censorship vector, or regulatory bottleneck that can in itself bring commercial failure.
A more balanced approach, successfully demonstrated by Uniswap v4, is modular, per-instance hooks that enable permissioned logic (including integration restrictions and exposure limits) on a pool-by-pool or vault-by-vault basis without imposing global centralisation. This preserves core protocol neutrality and open composability while allowing regulated or risk-averse deployments (such as RWA vaults) to apply tailored safeguards where needed.
Nest vaults follow a similar philosophy: risk controls are applied at the individual vault level rather than globally, ensuring contagion containment without sacrificing the broader ecosystem’s decentralised nature. Regulators should encourage this modular pattern over blanket allow-list mandates, as it achieves risk mitigation with fewer centralisation trade-offs.
Q58: Given the significant financial losses caused by bridge exploits, what criteria should be used to evaluate and select secure cross-chain bridges? Are long operational histories, multiple independent audits and transparent teams sufficient, or are additional compensating controls needed?
Selection criteria must prioritise proven cryptographic and economic security over marketing claims. Core evaluation criteria:
Long history, audits, and transparency are necessary but not sufficient—many exploited bridges met these thresholds yet failed due to single-point dependencies or insufficient economic deterrence.
Additional compensating controls required:
Plume selected LayerZero for omnichain RWA connectivity after rigorous evaluation against these criteria, combined with vault-level rate limits and monitoring, delivering secure cross-chain yield access without custodial exposure. Regulators should encourage this layered approach rather than relying solely on historical or audit-based signals.
Q59: What are the best practices for mitigating oracle risks, such as data manipulation or inaccuracies, during token management? Should decentralised oracle networks, Time-Weighted Average Price (TWAP) mechanisms and circuit breakers be considered standard controls, or are there other measures the industry should prioritise?
Best practices require defence-in-depth rather than reliance on any single control. Recommended controls (all implemented in Nest vault pricing logic):
Decentralised networks, TWAP, and circuit-breakers should be considered baseline standards for any material pricing dependency, especially in regulated RWA tokenisation. However, the industry should prioritise modular, configurable designs that allow issuers to layer additional sources and thresholds without prescriptive mandates.
Over-standardisation risks forcing suboptimal configurations; principles-based oversight focused on outcomes (accurate, manipulation-resistant pricing with documented fallbacks) better supports innovation while maintaining robustness.
Q60a: What measures should be adopted to ensure the reliability and robustness of redemption mechanisms, such as protections against Denial of Service (DoS) attacks?
DDoS attacks are more applicable to centralised systems that rely on servers controlling functions. Redemption mechanisms in non-custodial RWA vaults must guarantee liveness and fairness under all conditions, including panic runs or coordinated abuse. Fully onchain designs should ultimately support high (and ideally instant) liquidity, treating rapid large-scale drains as valuable real-time stress tests of reserve depth, market resilience, and secondary liquidity.
Current best practices – which Nest vaults implement in production – use layered, transitional controls to bridge the gap while secondary markets mature:
These controls ensure orderly exits during simulated or real stress without permanent friction. As RWA ecosystems deepen (through standardised pools and incentivised liquidity), throttling and caps can be phased down, enabling true instant liquidity. As RWA ecosystems mature, instant redemption at scale becomes the default without needing additional friction. Regulation should stay principles-based, rewarding resilient, high-liquidity designs rather than mandating gates.
Q60b: Would implementing features like queue throttling and per-address caps be effective in addressing these risks?
Yes, some queue throttling and per-address caps can be effective transitional safeguards against panic runs, coordinated abuse, or liquidity shocks that preserve fairness without centralized intervention. The downside is that they introduce friction and should not be permanent in fully onchain non-custodial systems.
Onchain non-custodial protocols should target instant, unrestricted redemptions as the end-state – rapid large-scale drains serve as healthy real-time stress tests of reserve composition, secondary market depth, and overall system resilience. Achieving this requires ecosystem maturation: deeper standardised liquidity pools, incentivised provision, and tighter onchain/offchain settlement alignment.
Q61a: Do you think that cryptographic attestations confirming the release of offchain assets could serve as an effective mitigating safeguard when integrated with onchain redemption processes?
Cryptographic attestation (signed messages or proofs from offchain custodians/verifiers) can offer partial mitigation in hybrid models by creating an auditable link between offchain asset/liquidity release and onchain redemption triggers. However they are limited because of persistent counterparty risk - the system still depends on the asserter’s integrity and operational timeliness. Models that push data onchain are better than those that pull.
Fully onchain non-custodial models will render attestations unnecessary, on long enough timelines. When assets are held directly in audited smart contracts, and redemptions are instant and atomic, the transparency becomes real-time and verifiable by anyone. For maximal robustness and minimal systemic risk, frameworks should prioritise native onchain collateralisation over attested hybrids. Principles-based guidance can encourage this shift.
Q61b: What additional controls or alternative approaches should be considered to address counterparty insolvency risks in tokenisation frameworks?
Counterparty insolvency risk is inherent to hybrid/digital-twin models where backing assets sit offchain with custodians or issuers. The most effective long-term mitigation is structural elimination of the counterparty through fully onchain, non-custodial collateralisation through vault protocols like Nest using underliers that are themselves fully onchain and represent direct ownership in the asset (as opposed to tokenised entitlements to offchain assets).
Additional controls for hybrid setups (listed in descending order of effectiveness):
Frameworks should remain principles-based, strongly incentivising native onchain designs (zero counterparty risk) while requiring hybrids to implement proportionate legal, audit, and disclosure safeguards with clear risk labelling.
Q62a: What practices should be adopted to minimise risks associated with token burning, such as flawed burn logic or irreversible mis-burns?
In RWA systems, token burning serves two distinct functions: (1) standard user redemptions (asset exit) and (2) compliance-driven asset seizures ("Forced Burns"). The risks and technical implementations differ fundamentally for each.
1. Standard user redemptions (the "deposit-and-burn" pattern) Users should never manually burn tokens to claim offchain assets, as this risks "irreversible mis-burns" (e.g., sending to the wrong null address). To prevent this:
2. Asset seizure & recovery ("atomic bypass" pattern) A major risk in "Forced Burn" logic is the inability to seize assets from a sanctioned (frozen) wallet because the transfer hook blocks the burn transaction. To resolve this without compromising security, we implement an atomic unblock-seize-reblock pattern (as seen in our NestShareSeizer module):
3. Governance & logic hardening
Q62b: Do you think that using well-tested, audited and standardised burnable contract code is sufficient to address the risks associated with token burning? If not, what alternative or additional controls, processes or practices should entities consider to enhance the security and reliability of token destruction mechanisms?
No. Well-tested, audited, standardised burnable code (e.g., OpenZeppelin ERC-20 extensions) is necessary and eliminates most logic flaws but is not sufficient alone. Deployment errors, governance abuse, upgrade regressions, or unforeseen interactions can still cause irreversible mis-burns or supply manipulation.
Additional controls:
Plume’s Nest and pUSD vaults use these layered controls, reserving admin burns for rare error correction with full onchain transparency.
Q63a: Do you agree with the key risks identified by the Authority, including the focus on people-related risks and common token security attack vectors?
Yes, we agree—the Authority correctly identifies people-related risks (insider threats, key compromise, social engineering) and common technical attack vectors as primary concerns in tokenisation systems. Historical exploits show that human factors and privileged access abuse cause more losses than pure code bugs alone.
We particularly endorse the emphasis on:
That said, the risk profile shifts materially in fully onchain, non-custodial designs: people risks are mitigated through distributed multisig/timelock governance rather than single custodians, while technical vectors are contained via immutable core logic, audited standards, and real-time monitoring.
Regulation should remain principles-based, requiring documented mitigation of these core risks (governance, monitoring, audits) while allowing flexibility for differing architectures rather than prescribing specific controls, particularly since onchain technology and processes to support tokenised assets are still evolving rapidly.
Q63b: Are there additional risks or areas of concern that you believe should be captured as part of this risk assessment?
Yes—the Discussion Paper’s risk assessment would benefit from explicitly addressing risks specific to non-custodial, smart-contract-native RWA tokenisation, which are structurally distinct and generally lower than those in traditional custodial VASP models.
Key additional areas:
In addition to suggestions made further above, e.g., related to disclosure and liquidity risk management, here are some additional risk mitigation concepts:
Q64a: How important do you believe regular tabletop exercises and red-team simulations are for preparing for high-impact scenarios like bridge exploits, oracle manipulation or key compromise?
Regular tabletop exercises have limited value even for large and mature organisations with complex coordination needs. It’s impractical for anyone to take these kinds of precautions as they exist in the realm of theoretical threats.
Theoretical exercises only take preparedness so far. Real-world resilience comes primarily from operating live systems under meaningful TVL: rapid detection, patching, and iteration based on actual anomalies, privileged-call patterns, and near-miss events. In this manner risks are mitigated by being intentional and using automation.
Plume prioritises continuous production hardening, i.e. real-time monitoring, automated anomaly response, and immediate post-incident retros, over scheduled simulations. This approach has delivered zero material incidents despite rapid growth and high TVL.
Both paths are valid depending on maturity and rather than mandating formal exercise frequency, allowing teams to tailor to their operational reality
Q64b: Are there other testing methodologies you know of that could improve readiness?
Yes. While tabletop exercises test human coordination, technical readiness is best improved through deterministic state simulation and continuous invariant testing. We recommend the following high-fidelity methodologies:
1. Mainnet shadow forking: Instead of relying on sterile "staging" environments, mature protocols should test upgrades and incident responses on a "Shadow Fork"—a private, read-only copy of the live mainnet state. This allows teams to simulate the exact impact of an exploit or patch against real user balances and contract states without risking actual funds.
2. Continuous invariant fuzzing: Moving beyond static audits, teams should run continuous "fuzzers" (e.g., Echidna, Medusa) that bombard the protocol with random inputs 24/7 to verify that core invariants (e.g., "solvency > 0," "user balance <= total supply") never break, even under extreme edge cases.
3. Competitive audit contests: Supplementing traditional bug bounties with time-boxed, gamified audit contests (e.g., Code4rena, Sherlock). These incentivize hundreds of security researchers to competitively find vulnerabilities in a specific update within a short window, offering a higher intensity of scrutiny than standard penetration testing.
4. Validator/sequencer "game days": Specifically for L2 networks, conducting live drills on test networks where non-critical infrastructure is intentionally taken offline (e.g., stalling the sequencer, partitioning a validator node) to verify that automated failover systems and redundancy protocols trigger correctly in production.
5. Pre-Flight governance simulation: Mandating that every multisig transaction or governance proposal is simulated against the current block state (using tools like Tenderly) before signing. This verifies the exact state changes and event logs match the intent, catching "silent failures" or malicious payloads before execution.
Q65: Do you agree with the necessity of pre-authorised emergency tools, such as circuit-breakers and kill switches, to contain incidents effectively? If yes, what governance, technical or activation controls should be in place to ensure their secure and responsible use?
We partially agree. Pre-authorised emergency tools like narrowly scoped circuit-breakers can be useful for damage containment in active exploits, but broad “kill switches” introduce excessive centralisation risk and should not be considered necessary in well-designed non-custodial systems.
Effective containment is better achieved through layered, automated defences (real-time monitoring, anomaly detection, onchain bounds) that respond without human intervention or global pauses. Full-system kill switches concentrate power, create censorship vectors, and undermine liveness—contrary to DeFi principles.
Production systems demonstrate that robust monitoring + immutable logic + targeted pauses (when needed) contain incidents without resorting to global off-switches. Frameworks should encourage this proportionate, minimal-intervention approach rather than mandating broad emergency powers.
Q66a: What are your views on implementing detection tools like onchain anomaly monitoring, mempool analytics and external threat intelligence as part of real-time threat monitoring?
We strongly support the implementation of these detection tools—they are essential and represent the most effective approach to real-time threat monitoring in tokenisation systems.
Onchain anomaly monitoring, mempool analytics, and integration with external threat intelligence enable proactive, automated identification of suspicious patterns (e.g., privileged-call abuse, flash-loan prep, bridge drains, or illicit flows) with near-zero latency. This allows rapid response—blocking, alerting, or escalating—before material damage occurs.
For future RWA protocols in particular, continuous behavioural monitoring at protocol and sequencer level is far superior to static upfront controls like KYC or digital identity (DID) solutions. KYC/DID add friction, exclude unbanked users, and provide only point-in-time assurance; they cannot detect sophisticated onchain threats (mixers, chain-hopping, contract exploits) that emerge post-onboarding. Real-time monitoring scales globally, preserves pseudonymity/privacy where appropriate, and adapts to evolving attack vectors without gatekeeping access.
Production systems demonstrate this: sequencer-integrated anomaly detection combined with onchain analytics has prevented incidents at scale without relying on identity layers. Frameworks should prioritise and incentivise these dynamic tools as the primary line of defence for RWA ecosystems.
Q66b: Additionally, are structured processes for migrating to patched contracts and regular audits sufficient for ensuring resilient recovery operations, or are there other recovery practices that firms should consider?
They are necessary but not sufficient on their own. Practices that materially improve recovery are real-time monitoring and automated containment. Chaos engineering and fault injection, as well as live anomaly-driven detection and response.
Q68: What safeguards can be implemented to address the 'dependency risk' created by programmable and composable smart contracts, including preventing cascading vulnerabilities, managing limitless composability risks (such as complex derivatives), and mitigating excessive leverage buildup through rehypothecation of tokenised assets?
Dependency risks are real and can propagate failures but they are manageable without stifling the core innovation of programmable RWAs. Effective safeguards, prioritized by impact are:
These controls contain contagion and leverage buildup while preserving composability benefits. Advanced RWA systems already deploy them with billions of TVL.
Q69: What measures can ensure better alignment between the liquidity of tokenised assets and their underlying reference assets, as well as mitigate timing mismatches between 24/7 token markets and traditional market hours?
To ensure better alignment between the liquidity of tokenised assets and their underlying reference assets—particularly for digital twins where onchain tokens may trade freely while offchain underliers (e.g., real estate or private credit) remain illiquid, we recommend a principles-based regulatory framework under DABA that mandates tailored liquidity risk management programs for issuers and platforms. This would require assessing and disclosing the liquidity profile of underlying assets relative to token tradability, with mechanisms to prevent mismatches that could lead to forced sales or NAV distortions during redemptions.
Drawing from established U.S. SEC Rule 22e-4 under the Investment Company Act of 1940 (as discussed in our responses to Q15b and Q15d), key measures could include: (i) classifying tokenised assets into liquidity buckets (e.g., highly liquid, moderately liquid, less liquid, illiquid) based on conversion time to cash without significant value impact, with monthly reviews to adapt to market conditions; (ii) implementing board-level oversight and reporting to ensure ongoing alignment, such as maintaining liquid reserves or staggered redemption gates for illiquid underliers; and (iii) automated onchain tools, like smart contract-enforced liquidity thresholds or oracle-fed pricing adjustments, to dynamically limit token issuance or trading when mismatches arise.
To mitigate timing mismatches between 24/7 onchain markets and traditional hours (e.g., weekends or holidays when offchain exchanges are closed), frameworks should incorporate hybrid settlement protocols: for instance, time-locked redemptions that align with underlying market availability, or "circuit breakers" pausing onchain trades during off-hours volatility. Platforms like Plume's Nest exemplify this by using auto-rebalancing vaults with liquidity reserves and staggered redemptions, ensuring tokenised RWAs reflect real-world constraints while leveraging DLT for efficiency.
Q70a: Could tokenisation be employed to present riskier or less liquid reference assets as safe and easily tradable instruments, potentially encouraging greater leverage and risk-taking?
Tokenisation can create the perception of higher liquidity and lower risk for illiquid or volatile underliers (e.g., private credit, real estate) via 24/7 onchain trading, potentially encouraging excess leverage if mismatches are not disclosed or appropriately managed.
This is not intentional misrepresentation but a natural economic mismatch: onchain tokens trade instantly while offchain settlement lags.
Effective mitigation focuses on transparency and design rather than prohibition:
Well-designed systems (fully onchain where possible) turn this risk into a feature: real-time price discovery and transparent reserves force accurate risk pricing. Importantly, the secondary trading of tokenised assets onchain reduces reliance on the redemption process to transfer exposure to these assets.
Regulation should require robust disclosures and documented liquidity management on a principles-based basis—empowering informed participation while preserving access to diversified, high-yield assets that drive innovation and capital formation.
Q70b: What measures could be implemented to reduce the likelihood of this occurring?
Tokenisation could potentially be used to present riskier or less liquid reference assets (e.g., private credit or real estate) as seemingly safe and easily tradable instruments through onchain liquidity mechanisms, which might encourage greater leverage and risk-taking by masking underlying illiquidity or volatility. However, the goal of regulation should not be to prevent access to such assets, as they offer valuable diversification and yield opportunities. Instead, frameworks under DABA should ensure that regulators, issuers, and investors are fully aware of these risks and make prudent decisions to mitigate them, primarily through enhanced disclosures and liquidity risk management rather than restrictive prohibitions.
To reduce the likelihood of misrepresenting risks, implement principles-based measures focused on transparency and mitigation: (i) mandatory, tailored disclosures in offering documents and onchain metadata, highlighting the liquidity profile, risk factors, and potential mismatches between token tradability and underlying assets, with plain-language warnings for retail investors; and (ii) adoption of liquidity risk management programs inspired by SEC Rule 22e-4 (as discussed in Q15b, Q15d, and Q69), requiring issuers to classify assets by liquidity buckets, and maintain reserves or redemption gates for illiquid underliers. These approaches empower informed decision-making without curtailing innovation and enhanced capital formation opportunities for investors and businesses alike.
Q79: What key components should a regulatory framework include to effectively address the cross-border challenges associated with RWAs, particularly when the physical asset storage, digital token issuance, trading activities and client bases encompass multiple jurisdictions with differing legal and regulatory requirements?
To effectively address cross-border challenges in real-world asset (RWA) tokenisation where physical storage, token issuance, trading, and client bases span multiple jurisdictions with varying legal and regulatory requirements, we recommend a principles-based regulatory framework under DABA that prioritizes interoperability, mutual recognition, and risk mitigation without imposing uniform global standards that could stifle innovation. Key components should include:
Platforms like Plume's Nest demonstrate these priorities in practice, with built-in tools for compliant, multi-jurisdictional RWA deployment that prioritize onchain transparency and automated accountability, supporting efficient global ecosystems while aligning with diverse regulatory landscapes.
