I got a mad idea, and I blame DOS week for it. The background for this is special_snowflake, the FOSH computer I've been building/writing for the last 5 years. Without going into all the details (because then I'd be here for two days), its new CPU is registerless and reads and writes main memory directly for all instructions. To keep it performant, the "main memory" is 0.5-8 KiB in size and treats the actual RAM as expanded memory, copying bits of it in and out. Like a disk. Now, that leaves me with the question of how to organize this expanded memory. I decided I won't go the normal route and won't implement a virtual memory system. Which leaves me with the question of what to do instead.
And then it occurred to me yesterday. What IF you treated your expanded memory as a literal disk and put FAT on it? To allocate memory, a process creates a file of some size. Processes can pass memory objects to each other by passing filesystem paths around. All memory objects are dynamically sized. Some implementations of FAT (DR-DOS 6.0 and others) track user & group IDs as well as access permissions for files. You could literally implement the well-understood-by-sysadmins Unix filesystem permissions model for all memory! Super important: a single categorization unites ALL of memory, which means ALL of memory is always accounted for. Modern systems have all kinds of weird kinks. Inodes, sysfs, procfs, netlink, ioctl, device nodes, semaphores, multiple kinds of sockets, multiple namespaces for all those kinds of sockets, ACLs, quotas, memory maps, process trees, uids and gids, mounts, etc, etc... If you put all of those into a filesystem, you suddenly make them all observable and manipulable. Another benefit: it's now possible to dump the entire contents of memory to real disks and examine or change them. Messing with system internals is now easier than ever! And the best part? Both the running memory and the disk image of it can be manipulated with normal filesystem tools!
This can be made to play real nice in a microkernel design with lots of system daemons offering services. I already checked the extensive Wikipedia article on FAT and found that it practically natively supports being used for this purpose. You would need to repurpose some fields from what Microsoft uses them for, but that is a long and time-honoured tradition. The article (linked below) lists several mutually incompatible standards for various data structures that were all used in parallel by several operating systems from several vendors.
Besides breaking Microsoft's non-standard on FAT, there are two other problems I can see. The first is that my CPU is big-endian while FAT is used on little-endian machines, which means the multi-octet fields will be byte-swapped. That can be lived with - it only comes into effect if somebody attempts to mount the memory image on a little-endian machine, and that can be fixed by the appropriate filesystem driver.
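The endianness fix really is that small, by the way. FAT's on-disk fields are defined as little-endian, so as long as the driver decodes them with an explicit byte order instead of whatever the host CPU happens to use, the same code works on both kinds of machine. A quick sketch (the 512-byte value is just an example sector size):

```python
import struct

# FAT stores multi-octet fields little-endian on disk. Decoding with an
# explicit "<" (little-endian) format string gives the same result on
# big- and little-endian hosts alike, so a driver never has to care
# what the CPU's native byte order is.
field = bytes([0x00, 0x02])          # a 16-bit field holding 0x0200
(bytes_per_sector,) = struct.unpack("<H", field)
print(bytes_per_sector)              # 512
```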
The much bigger problem - and the only serious problem I've seen so far - is that FAT keeps track of file contents in a singly linked list. That's... inappropriate for quick access. :) Since the use of this scheme implies there is only one daemon which manages the memory filesystem for all other applications, and since this is the only place where file handles (or whatever) are handed out, this limitation can be worked around. The filesystem daemon could keep a special structure for all open files/assigned file handles which maps out all clusters that are part of the file. So if you want to randomly access parts of a large file, the FS daemon doesn't have to read the entire FAT chain from the start; it can just read from its map. But the problem is that, since this is a memory management scheme, most files can be expected to be open. Which would mean most files would have an associated fast-access map. But then the question is why have the FAT at all, if most access is going through the map?
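The workaround itself is straightforward: walk the linked chain once at open time, cache the clusters in order, and answer every later random access from the cache. A minimal sketch, using a simplified FAT (0 = free, -1 standing in for the real end-of-chain marker):

```python
# Sketch of the fast-access map: the daemon walks the FAT chain once
# when a file is opened, then serves random accesses from the cached
# table instead of re-walking the singly linked list every time.

END = -1  # stand-in for FAT's end-of-chain marker

def build_cluster_map(fat, first_cluster):
    """Follow the chain once; return the file's clusters in order."""
    chain, c = [], first_cluster
    while c != END:
        chain.append(c)
        c = fat[c]
    return chain

def cluster_for_offset(chain, offset, cluster_size):
    """O(1) random access via the cached map."""
    return chain[offset // cluster_size]

# A file occupying clusters 5 -> 9 -> 3:
fat = [0] * 16
fat[5], fat[9], fat[3] = 9, 3, END

chain = build_cluster_map(fat, first_cluster=5)
print(chain)                                  # [5, 9, 3]
print(cluster_for_offset(chain, 1024, 512))   # 3 (the third cluster)
```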
Another mitigation strategy is to take care to keep the memory defragmented, and then try to assign memory in large contiguous blocks of clusters. That way, if the random access happens inside such a large block, working out the sector (=page) that is to be accessed is straightforward arithmetic. The access-map cache from the previous paragraph then only needs to keep track of the starts and lengths of blocks. The scheme is simpler, requires less overhead, and should benefit from the last four decades (!) of improvements to FAT drivers and algorithms.
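In other words, the per-file map collapses into a short extent list, and an offset lookup becomes a scan over a handful of (start, length) pairs plus one addition. A sketch of that variant:

```python
# Sketch of the extent variant: with defragmented, contiguous
# allocation, the per-file map is just (start_cluster, length) pairs.
# Finding a cluster is a short scan plus arithmetic, not a per-cluster
# table lookup.

def cluster_for_offset(extents, offset, cluster_size):
    """extents: list of (start_cluster, length_in_clusters) pairs,
    in file order."""
    index = offset // cluster_size
    for start, length in extents:
        if index < length:
            return start + index
        index -= length
    raise ValueError("offset past end of file")

# A file stored as two contiguous runs: clusters 10-13, then 40-41.
extents = [(10, 4), (40, 2)]
print(cluster_for_offset(extents, 0, 512))        # 10
print(cluster_for_offset(extents, 5 * 512, 512))  # 41
```

With files allocated as one or two runs each, most lookups hit the first extent immediately, which is why this beats the full per-cluster map in both size and simplicity.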
This manage-memory-as-a-filesystem idea can BTW also be implemented on normal register-based CPUs, by having some pages (in the reserved section between sector 0 and the first FAT table xD ) function as faux main memory for currently running processes, with the rest used in the FAT.
Wikipedia page on FAT: https://en.wikipedia.org/wiki/Design_of_the_FAT_file_system