
[FRAME Core] Remove without_storage_info on pallets #323

Open
27 of 40 tasks
shawntabrizi opened this issue Apr 16, 2021 · 0 comments
Labels
T1-FRAME This PR/Issue is related to core FRAME, the framework.

Comments


shawntabrizi commented Apr 16, 2021

paritytech/substrate#8556

We need to update all instances of Vec in runtime storage to BoundedVec, which will allow us to start generating worst-case PoV sizes.
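As an illustration of the change, a storage item moves from an unbounded Vec to a BoundedVec whose limit comes from the pallet's Config. The item and bound names below (Members, MaxMembers) are hypothetical, not taken from any specific pallet:

```rust
// Before: unbounded value, no MaxEncodedLen, forces `without_storage_info`.
#[pallet::storage]
pub type Members<T: Config> = StorageValue<_, Vec<T::AccountId>, ValueQuery>;

// After: bounded by a limit from the pallet `Config`, so the pallet can
// compute a worst-case PoV size for this item.
#[pallet::storage]
pub type Members<T: Config> =
	StorageValue<_, BoundedVec<T::AccountId, T::MaxMembers>, ValueQuery>;
```

Call sites then switch from push to try_push (or try_mutate on the storage item), which fails once MaxMembers is reached instead of growing without bound.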

Polkadot:

  • Pallet XCM
  • XCM in general.

Infra stuff:

  1. PalletsOrigin @gavofyork MEL: Origin, Referenda, ConvictionVoting substrate#11631
  2. Call should impl MaxEncodedLen. Two options:
  • Any dispatchables whose args do not all impl MaxEncodedLen should be marked #[pallet::call(unbounded)] and will not be included in Call. Introduce an additional UnboundedCall type which gets created in parallel to Call, and includes all dispatchables but does not impl MaxEncodedLen.
  • Introduce a new Signed Extension, AuxData, backed by the Preimages pallet. This places one or more blobs of data in a non-persistent storage item of the Preimages pallet, indexed by their hash. It can be a map which is cleared at on_finalize, but would ideally use Transient storage runtime host function #359. The data can be retrieved through the usual Preimages API in exactly the same manner as the persistent preimages. Large arguments for Calls dispatched immediately in a transaction would be encoded, placed in this AuxData Signed Extension, and referenced by hash. If such a Call were expected to execute autonomously on-chain, then the preimage would need to be registered as usual. note_preimage would use AuxData; set_code would use (via the trait interface) the Preimages pallet.
  • Call need not impl MEL, instead we can just avoid storing it directly but do so through a preimage. Bound uses of Call substrate#11649
  3. #[pallet::storage(write_only)] for items which cannot/must not be read. The MEL is allowed to be undefined in this case, and if a new production feature is enabled, then there should be no storage API to read the item (this is a larger refactoring and will need additional changes to the storage macros). This, together with marking the low-level storage accesses as unsafe APIs, will give build-time safety for PoV size.
  4. #[derive(MaxEncodedLen)] should respect #[codec(skip)].
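The last point can be illustrated with a small sketch; since the issue is precisely that the derive does not yet honor the attribute, this only shows the desired outcome, and the type and field names are made up:

```rust
use codec::{Encode, MaxEncodedLen};

#[derive(Encode, MaxEncodedLen)]
struct VoteRecord {
	ayes: u32,
	nays: u32,
	// Skipped during encoding, so it should also be excluded from the
	// `max_encoded_len()` bound once the derive respects the attribute.
	#[codec(skip)]
	cached_tally: Option<u64>,
}
```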

Before any pallet is checked off above, the final step in this process is to remove the without_storage_info macro:

	#[pallet::pallet]
	#[pallet::generate_store(pub(super) trait Store)]
	#[pallet::without_storage_info] <--- here
	pub struct Pallet<T, I = ()>(PhantomData<(T, I)>);

Removing it asserts that all the storage items in the pallet are compatible with the storage info trait.
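Once every storage item in a pallet implements MaxEncodedLen, the attribute can simply be deleted:

```rust
#[pallet::pallet]
#[pallet::generate_store(pub(super) trait Store)]
pub struct Pallet<T, I = ()>(PhantomData<(T, I)>);
```

If any item still lacks a bound, the pallet macro then fails to compile, which is exactly the build-time PoV safety the list above is after.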

@KiChjang changed the title from “Migrate Runtime Storage Vec to BoundedVec” to “Enable generate_storage_info on pallets” on Jul 18, 2021
@shawntabrizi changed the title from “Enable generate_storage_info on pallets” to “Remove without_storage_info on pallets” on Apr 21, 2022
@kianenigma changed the title from “Remove without_storage_info on pallets” to “[FRAME Core] Remove without_storage_info on pallets” on Jun 2, 2023
@juangirini transferred this issue from paritytech/substrate on Aug 24, 2023
@the-right-joyce added the T1-FRAME (This PR/Issue is related to core FRAME, the framework.) label and removed the C3-medium label on Aug 25, 2023
claravanstaden pushed a commit to Snowfork/polkadot-sdk that referenced this issue Dec 8, 2023
github-merge-queue bot pushed a commit that referenced this issue Jan 29, 2024
Remove `without_storage_info` from the XCMP queue pallet. Part of
#323

Changes:
- Limit the number of channels that can be suspended at the same time.
- Limit the number of channels that can have messages or signals pending
at the same time.

A no-op migration is put in place to ensure that all `BoundedVec`s still
decode and do not truncate after the upgrade. The storage version is thereby
bumped to 4 to have our tooling remind us to deploy that migration.
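Such a decode-only version bump could be sketched roughly as below, assuming frame_support's VersionedMigration wrapper; the inner storage item (SuspendedChannels) and type names are hypothetical stand-ins, not the actual XCMP-queue code:

```rust
use frame_support::{migrations::VersionedMigration, pallet_prelude::*};

/// Hypothetical inner migration: read the item under its new bounded type
/// so that a decode failure surfaces during the upgrade; nothing is written.
pub struct CheckBoundedDecode<T>(core::marker::PhantomData<T>);

impl<T: crate::Config> frame_support::traits::UncheckedOnRuntimeUpgrade
	for CheckBoundedDecode<T>
{
	fn on_runtime_upgrade() -> Weight {
		let _ = crate::SuspendedChannels::<T>::get();
		T::DbWeight::get().reads(1)
	}
}

/// Only runs when the on-chain storage version is 3, then bumps it to 4.
pub type MigrateToV4<T> = VersionedMigration<
	3,
	4,
	CheckBoundedDecode<T>,
	crate::Pallet<T>,
	<T as frame_system::Config>::DbWeight,
>;
```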

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
github-merge-queue bot pushed a commit to Cardinal-Cryptography/aleph-node that referenced this issue Feb 14, 2024
# Description

Complying with paritytech/polkadot-sdk#323 in
two pallets (out of five). The other (`aleph`, `committee_management`
and `elections`) contain vectors, but they are sudo-controlled, so we
should be kinda safe (although we might want to come up with some size
bounds there at some point).

## Type of change

- Bug fix (non-breaking change which fixes an issue)
ggwpez added a commit that referenced this issue Apr 2, 2024
github-merge-queue bot pushed a commit that referenced this issue May 16, 2024
Re-applying #2302 after increasing the `MaxPageSize`.  

Remove `without_storage_info` from the XCMP queue pallet. Part of
#323

Changes:
- Limit the number of messages and signals an HRMP channel can have at
most.
- Limit the number of HRMP channels.

A no-op migration is put in place to ensure that all `BoundedVec`s still
decode and do not truncate after the upgrade. The storage version is thereby
bumped to 5 to have our tooling remind us to deploy that migration.

## Integration

If you see this error in your try-runtime-cli:  
```pre
Max message size for channel is too large. This means that the V5 migration can be front-run and an
attacker could place a large message just right before the migration to make other messages un-decodable.
Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
102400, MaxPageSize: 65535
```

Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` to
something like this:
```rust
type MaxPageSize = ConstU32<{ 103 * 1024 }>;
```

There is currently no easy way for on-chain governance to adjust the
HRMP max message size of all channels, but it could be done:
#3145.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
hitchhooker pushed a commit to ibp-network/polkadot-sdk that referenced this issue Jun 5, 2024
TarekkMA pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this issue Aug 2, 2024