The current code maintains an entry_seqno_index with a ccf::indexing::strategies::SeqnosForValue_Bucketed<EntryTable>, which is essentially a list of the seqnos at which there is a write to the entry table (i.e. CTS business transactions, as opposed to CCF internal/governance transactions).
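For context, a rough sketch of how such a bucketed seqno index is typically wired into a CCF app's endpoint registry. This is not the CTS code: the registry class, table name/schema, and bucket parameters are placeholders, and it uses CCF's built-in SeqnosByKey_Bucketed strategy (as in the CCF logging sample) rather than the CTS-specific SeqnosForValue_Bucketed variant.

```cpp
// Hedged sketch only -- not the actual CTS source. EntriesRegistry,
// ENTRY_TABLE_NAME, EntryTable's schema, and the bucket parameters are
// placeholders; the pattern follows CCF's bucketed seqno indexing
// strategies (cf. SeqnosByKey_Bucketed in the CCF logging sample).
#include "ccf/app_interface.h"
#include "ccf/indexing/strategies/seqnos_by_key_bucketed.h"
#include "ccf/kv/map.h"

// Placeholder stand-in for the CTS entry table (namespace may be ccf::kv
// in newer CCF releases).
using EntryTable = kv::Map<size_t, std::vector<uint8_t>>;
using EntrySeqnosIndex =
  ccf::indexing::strategies::SeqnosByKey_Bucketed<EntryTable>;

static constexpr auto ENTRY_TABLE_NAME = "public:app.entries"; // placeholder

class EntriesRegistry : public ccf::UserEndpointRegistry
{
  std::shared_ptr<EntrySeqnosIndex> entry_seqno_index = nullptr;

public:
  EntriesRegistry(ccfapp::AbstractNodeContext& context) :
    ccf::UserEndpointRegistry(context)
  {
    // Every node rebuilds this index by replaying the ledger, so both its
    // memory footprint and the time before /entries/txIds can answer grow
    // with the number of entry-table writes.
    entry_seqno_index = std::make_shared<EntrySeqnosIndex>(
      ENTRY_TABLE_NAME,
      context,
      10000, // seqnos per bucket (placeholder)
      20); // max buckets kept in memory (placeholder)
    context.get_indexing_strategies().install_strategy(entry_seqno_index);
  }
};
```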
This is exposed by the /entries/txIds endpoint, presumably to facilitate scanning through all the CTS ledger entries using the API, as opposed to the ledger files.
This may be a premature and ultimately harmful optimisation: it trades an ever-growing in-memory index for the ability to skip deserialising what should normally be a fairly small number of transactions (governance is rarely the major part of a ledger). Aside from the memory cost, it also greatly increases first-response latency on new nodes, which currently cannot respond usefully until they have rebuilt the index up to that point. An index-less historical query would come back much faster in that situation.
My sense is that it would be best to:
- Remove the index
- Convert /entries/txIds to a regular historical query, if it's needed at all (see the sketch after this list)
- Establish one or more benchmarks that we think /entries/txIds should meet, and decide what, if any, optimisation is needed
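If the endpoint is kept, a minimal sketch of what an index-less variant could look like, built on CCF's historical state cache (AbstractStateCache::get_store_range). The table name and schema, the fixed request handle, and the write-detection check are assumptions for illustration, not the existing CTS implementation.

```cpp
// Hedged sketch, not the existing CTS code: EntryTable, ENTRY_TABLE_NAME,
// the handle management, and the write-detection check are assumptions.
// It fetches historical stores on demand rather than consulting an index.
#include <optional>
#include <vector>

#include "ccf/historical_queries_interface.h"
#include "ccf/kv/value.h"

using EntryTable = kv::Value<std::vector<uint8_t>>; // placeholder schema
static constexpr auto ENTRY_TABLE_NAME = "public:app.entry"; // placeholder

// Collect the seqnos in [from, to] at which the entry table was written.
// Returns nullopt while the stores are still being fetched from the ledger,
// in which case the endpoint would reply 202 with a retry-after header.
std::optional<std::vector<ccf::SeqNo>> entry_seqnos_in_range(
  ccf::historical::AbstractStateCache& cache,
  ccf::historical::RequestHandle handle,
  ccf::SeqNo from,
  ccf::SeqNo to)
{
  auto stores = cache.get_store_range(handle, from, to);
  if (stores.empty())
  {
    return std::nullopt; // not yet fetched
  }

  std::vector<ccf::SeqNo> seqnos;
  for (const auto& store : stores)
  {
    auto tx = store->create_read_only_tx();
    auto entries = tx.ro<EntryTable>(ENTRY_TABLE_NAME);
    // Assumption: the entry value visible at this seqno was last written at
    // this very seqno iff the transaction here registered a CTS entry.
    if (entries->get_version_of_previous_write() == store->current_version())
    {
      seqnos.push_back(store->current_version());
    }
  }
  return seqnos;
}
```

The trade-off is then a bounded amount of per-request ledger reads and deserialisation, instead of a node-wide index that every node must rebuild before the endpoint is useful; the benchmarks proposed above would show whether that cost is acceptable.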