It's far from a disaster because the best way to deal with a list is still to use a contiguous array.
Allocation isn't free. You have to do it at some point, and doing it in bulk costs about the same as a single small allocation anyway. If you're mapping new pages into memory, making system calls, or walking a heap structure, you're not only affecting the data cache but also the TLB, all while pointer chasing.
Making a list out of an allocation for every element is performance suicide on modern computers. Even the amortized resizing of an array runs at memory-bandwidth speed thanks to the prefetcher. Calling that a performance hindrance is absurd when it beats the allocate-per-element alternative by orders of magnitude. If there really is a hard limit on how long any single insertion can take, then the solution is links between large chunks of memory, which still avoids an allocation per element.
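A rough C++ sketch of that last idea (the chunk size and the names here are my own, not from the comment): each link is a large chunk, so an allocation only happens once per few thousand insertions and no insertion ever has to copy the existing elements.

```cpp
#include <cstddef>

// A list made of links between large chunks of memory: allocation happens
// once per CHUNK insertions, and growing never copies existing elements.
constexpr std::size_t CHUNK = 4096;

struct Chunk {
    int         items[CHUNK];   // element storage for this chunk
    std::size_t count = 0;      // slots in use
    Chunk*      next  = nullptr;
};

struct ChunkList {
    Chunk* head = nullptr;
    Chunk* tail = nullptr;

    void push_back(int value) {
        if (tail == nullptr || tail->count == CHUNK) {
            Chunk* c = new Chunk();               // the only allocation: 1 per CHUNK pushes
            if (tail) tail->next = c; else head = c;
            tail = c;
        }
        tail->items[tail->count++] = value;
    }

    ~ChunkList() {
        for (Chunk* c = head; c != nullptr;) {
            Chunk* n = c->next;
            delete c;
            c = n;
        }
    }
};
```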
Obviously the compromise is to use a linked list, but preallocate a contiguous block of memory to hold all your linked list nodes. Like an array of linked list nodes, each storing the array index of the next node.
Then you just need to double the size of that block of memory and copy all the nodes to the new block whenever it fills up.
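Roughly, as a C++ sketch (the names are made up): the nodes live in one std::vector, the "pointer" is just an index, and the double-and-copy on growth is exactly what the vector already does. A nice side effect of indices over raw pointers is that they stay valid when the backing array reallocates.

```cpp
#include <cstddef>
#include <vector>

// A linked list whose nodes all live in one contiguous, growable array.
// Links are array indices rather than pointers; -1 means "no node".
struct Node {
    int value;
    int next;
};

struct IndexList {
    std::vector<Node> nodes;   // the preallocated (and doubling) block of nodes
    int head = -1;

    // Push to the front: one amortized-O(1) append to the backing array.
    void push_front(int value) {
        nodes.push_back(Node{value, head});
        head = static_cast<int>(nodes.size()) - 1;
    }
};
```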
Hmm, but you’d probably end up needing some kind of list of the free blocks of space within that array to deal with fragmentation… you’d need another linked list to store those blocks in.
If only there was some sort of system that could take care of all that allocation and freeing of memory for you?
It looks like you are conflating a linked list with a heap. Once you have an array/vector for your node list, the unused slots can form a list themselves, each one storing the index of the next free slot. To get memory you just take the next free index, and when you free a node you point it at the current free head. You don't need to manage the memory any further than that.
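A sketch of that free list (names invented for illustration): the unused slots are threaded through the same array via their next index, so allocating a node pops the free head and freeing a node pushes it back on. Nothing else has to be tracked.

```cpp
#include <vector>

struct Node {
    int value;
    int next;   // index of the next node, or -1
};

struct NodePool {
    std::vector<Node> nodes;
    int free_head = -1;    // head of the free list, threaded through 'next'

    // Grab a slot: reuse a freed one if available, otherwise grow the array.
    int alloc_node(int value, int next) {
        if (free_head != -1) {
            int i = free_head;
            free_head = nodes[i].next;      // pop the free list
            nodes[i] = Node{value, next};
            return i;
        }
        nodes.push_back(Node{value, next});
        return static_cast<int>(nodes.size()) - 1;
    }

    // Return a slot: it becomes the new head of the free list.
    void free_node(int i) {
        nodes[i].next = free_head;
        free_head = i;
    }
};
```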