How to solve this problem using locking
(Note: this is probably not the best solution to the problem you actually have – see below for an alternative – but I'm writing it anyway in case someone else with a similar problem, one for which the alternative answer wouldn't work, finds this question later.)
As @cacfe25 stated in the comments, it is not in general safe to unlock memory unless you are using a custom allocator – there is no easy way to know whether the allocator needed the memory locked itself. (On the flip side, the allocator won't know whether you locked the memory, so it could theoretically end up unlocking memory that you locked, although in practice this is unlikely to happen.)
Additionally, the way you lock memory varies by operating system. For this answer, I'm going to assume a POSIX operating system (Linux, Mac OS X, etc.), because that's the set of platforms for which I actually know the answer.
For your use case, you apparently don't care about unlocking and just want the memory to remain locked for the lifetime of the process. On POSIX systems, the easiest way to do that is simply to lock everything (this uses the libc crate to get access to the POSIX APIs directly):
use libc::{mlockall, MCL_CURRENT, MCL_FUTURE};

pub fn main() {
    // Safety: this is a valid flags argument to `mlockall`, and it has
    // no other safety restrictions.
    if unsafe { mlockall(MCL_CURRENT | MCL_FUTURE) != 0 } {
        panic!("Could not lock memory");
    }
}
Using MCL_CURRENT and MCL_FUTURE together guarantees that memory is locked regardless of whether you allocate it before or after the call. If you're sure that the memory has been allocated already, you don't need the MCL_FUTURE and can just use MCL_CURRENT (however, bear in mind that Vec doesn't always allocate all its memory immediately – you would probably want to use Vec::with_capacity or even Vec::try_reserve to force the allocation to be done in advance).
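For example, a minimal sketch of the MCL_CURRENT-only variant might look like this (the 1 MiB size is just a placeholder; pre-allocate whatever you actually need before locking):

use libc::{mlockall, MCL_CURRENT};

fn main() {
    // Reserve the space up front so the allocation already exists (and
    // is therefore covered) at the time of the `mlockall` call.
    let mut data: Vec<u8> = Vec::with_capacity(1024 * 1024);

    // Safety: this is a valid flags argument to `mlockall`.
    if unsafe { mlockall(MCL_CURRENT) != 0 } {
        panic!("Could not lock memory");
    }

    // Filling within the existing capacity doesn't reallocate, so the
    // vector's backing memory stays locked.
    data.resize(1024 * 1024, 0);
}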
Also note that MCL_FUTURE may cause future allocations to segfault if the locked memory would exceed the limit on how much memory a single process can lock at once (in most shells, you can view and change this limit using ulimit -l). This segfault isn't a sign of undefined behaviour; it's just the way that the operating system reacts to an impossible memory allocation. Locking 70% of your memory into a single process would be unusual and operating systems tend to frown on it, so you might need root permissions in order to increase the limit.
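If you'd rather check that limit from inside the program than via the shell, you can query RLIMIT_MEMLOCK directly – a sketch, with error handling kept minimal:

use libc::{getrlimit, rlimit, RLIMIT_MEMLOCK};

fn main() {
    let mut lim = rlimit { rlim_cur: 0, rlim_max: 0 };
    // Safety: we pass a valid pointer to an `rlimit` struct.
    if unsafe { getrlimit(RLIMIT_MEMLOCK, &mut lim) } == 0 {
        // Soft and hard limits, in bytes (RLIM_INFINITY means unlimited).
        println!("mlock limit: soft {}, hard {}", lim.rlim_cur, lim.rlim_max);
    }
}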
For your use case, locking all the program's allocations seems like a better option than locking just some of them, because a) it's easy, and b) you're trying to see whether the program would run correctly on a system without swap, and in that situation the entirety of the memory used by the program would have to fit into main memory (not just a single vector).
If locking everything isn't an option, you will need to align the memory you lock to a multiple of the system page size. There's a problem with this, though: the system page size is a property of the processor on which the code is being run, so unless you are building the program for a specific processor (and hardcoding its page size), you'll have to determine the page size at runtime, e.g. with sysconf(_SC_PAGESIZE) (which in turn makes it hard to use the standard Vec and standard allocator simultaneously, because Vec doesn't have an easy way to vary the alignment of its allocations). The best solution is probably to use Vec::with_capacity_in with a custom allocator which just wraps the global allocator and increases the alignment – but that requires the use of an unstable API whose details are probably going to change in the future.
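To make that concrete, here's a rough sketch of what it could look like on nightly Rust with the unstable allocator_api feature (the PageAligned name is mine, and this API's details are exactly what is liable to change):

#![feature(allocator_api)]

use std::alloc::{AllocError, Allocator, Global, Layout};
use std::ptr::NonNull;

// Wrapper allocator: delegates to the global allocator, but raises each
// allocation's alignment to the page size so that every allocation
// starts on a page boundary.
struct PageAligned {
    page_size: usize,
}

unsafe impl Allocator for PageAligned {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        let layout = layout.align_to(self.page_size).map_err(|_| AllocError)?;
        Global.allocate(layout)
    }

    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
        // Must mirror the alignment adjustment made in `allocate`.
        let layout = layout.align_to(self.page_size).unwrap();
        Global.deallocate(ptr, layout)
    }
}

fn main() {
    // Safety: `sysconf` has no preconditions for this argument.
    let page_size = unsafe { libc::sysconf(libc::_SC_PAGESIZE) } as usize;
    let v: Vec<u8, _> = Vec::with_capacity_in(4096, PageAligned { page_size });
    // `v.as_ptr()` now lies on a page boundary, so the backing buffer
    // is suitable for passing to `mlock`.
    let _ = v;
}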
An alternative solution, which may be better
It sounds like your actual problem is not "how do I run this program with all its data locked into memory" but "how do I ensure that this program is not using so much data that it couldn't store it all in memory" – in other words, you don't care about where the data is stored, just that there isn't too much of it.
Just as there's typically a limit on how much memory you can lock at once, operating systems typically also allow you to set limits on how much memory the program can use in total. On many operating system / shell combinations, you can use the ulimit command for this before running the program (it affects all subsequent commands in the same shell / terminal session, but the effect ends once you close the shell / terminal). For example, on Linux using bash,
ulimit -Sv 4000000
would impose a virtual memory limit of 4 million kilobytes. (There are various sorts of memory limit, but a virtual memory limit is probably what you want for this: it limits the maximum amount of memory that the program would be allowed to read/write, even if some of it is allocated lazily and even if some of it is never actually touched.) The details of how you set a memory limit depend on both the operating system and the shell you're using (because, in a way, it's a setting that "belongs to the shell"), although it wouldn't surprise me if they were very similar in practice across a range of operating systems and shells.
(The documentation of ulimit may mention "soft" and "hard" limits – the difference is that a soft limit can be raised again within the same shell, whereas a hard limit can only be lowered further, never raised, until the shell exits. Typically, you would use soft limits if the limit is imposed for testing purposes, and hard limits if it's imposed for security purposes. My example above uses a soft limit, which bash expresses as -S.)
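You can see the soft/hard distinction concretely with the same rlimit machinery that ulimit drives – a sketch, where RLIMIT_AS is the resource behind ulimit -v and the ~4 GB figure is just an example:

use libc::{getrlimit, rlimit, setrlimit, RLIMIT_AS};

fn main() {
    let mut original = rlimit { rlim_cur: 0, rlim_max: 0 };
    // Safety: all calls are passed valid pointers to `rlimit` structs.
    unsafe {
        assert_eq!(getrlimit(RLIMIT_AS, &mut original), 0);

        // Lower the soft limit, leaving the hard limit alone.
        let lowered = rlimit {
            rlim_cur: 4_000_000 * 1024,
            rlim_max: original.rlim_max,
        };
        assert_eq!(setrlimit(RLIMIT_AS, &lowered), 0);

        // Raising the soft limit back up succeeds, because the hard
        // limit was never touched; lowering `rlim_max` would have been
        // irreversible without privileges.
        assert_eq!(setrlimit(RLIMIT_AS, &original), 0);
    }
}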
I nearly always use this general technique to set a memory limit on programs when I'm testing them, so that if the program starts using an unbounded amount of memory, I get notified that something is wrong before the computer runs out of memory. In your case, you could probably set a memory limit equal to the amount of memory you expected to be available in production (minus a small percentage as a safety margin), and see if the program was still able to run with the limit in place. (You can test that the limit is working by intentionally setting it too low to see what happens.) As a nice bonus, you usually get a reasonable error message (rather than a segfault) when a program fails due to hitting a virtual memory limit, so if there is something wrong, this technique is helpful in working out what's causing the issue.
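If you want the program itself to react to the limit rather than abort with that error message, Vec::try_reserve surfaces the failure as a Result. A sketch (the 8 GB figure is just meant to exceed whatever limit you set):

fn main() {
    let mut v: Vec<u8> = Vec::new();
    // Under a virtual memory limit, an oversized reservation fails
    // cleanly here instead of the process dying partway through its work.
    match v.try_reserve(8_000_000_000) {
        Ok(()) => println!("allocation succeeded"),
        Err(e) => println!("allocation failed: {e}"),
    }
}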