The following script can create a seg fault:
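A minimal sketch of the kind of script that triggers it (hypothetical; the exact reproduction may differ):

```zeek
event zeek_init()
	{
	local v: vector of count = vector();
	# |v| is an unsigned count of 0, so |v| - 1 wraps to 4294967295
	v[|v| - 1] = 1;
	}
```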
At first it seemed like accessing a negative index should return a nicer error message, but on closer inspection I don't think that's easy to do unless we remove the ability to assign to arbitrary vector indices (which causes automatic resizes).
The problem is that the index expression gets evaluated first, wrapping an unsigned count value around to 4294967295, and that value is passed as the index for the vector assignment. The assignment is then happy to resize the internal vector to 4294967295 + 1, which wraps back to 0 and truncates the vector. The subsequent assignment into the internal vector at that index then accesses an invalid position.
A potential fix can't just check around the storage capacity boundary either: if you're off by a few instead of exactly -1, then allocating ~4 billion elements without crashing is still a bad thing. Or the code may not even wrap the integer storage capacity: you might just have code that unintentionally assigns past the end of the vector without ever realizing it. It doesn't crash, but it allocates your vector in an odd/inefficient way.
I think the most intuitive thing is to restrict assignment of vector indices to only those less than or equal to the vector's size. E.g. this is still valid:
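Something along these lines (a hypothetical sketch), since the index equals the current size:

```zeek
local v: vector of count = vector(1, 2, 3);  # |v| == 3
v[3] = 4;                                    # index == size: appends one element
```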
This would not be:
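For example (hypothetical sketch):

```zeek
local v: vector of count = vector();  # |v| == 0
v[10] = 5;                            # index > size: would now be an error
```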
The above currently assigns 10 "nil" values to indices 0-9, and you can check for existence with the 'in' operator. A 'for' loop also only iterates over the existing indices.
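That current behavior might look like this (hypothetical sketch):

```zeek
local v: vector of count = vector();
v[10] = 5;        # today: pads indices 0-9 with "nil" before assigning index 10
print 10 in v;    # membership test over indices
for ( i in v )    # loops only over the indices that actually exist
	print i;
```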
If this type of thing were still desirable, I think it would be better to have an explicit "resize_vector" BIF.
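Usage of such a BIF might look like this (hypothetical; "resize_vector" is the proposed name, not an existing function):

```zeek
local v: vector of count = vector();
resize_vector(v, 10);  # explicitly create indices 0-9
v[9] = 5;              # now an in-bounds assignment
```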