Operating Systems that don't recover memory/resources after a process terminates

Hello Cred Forums, I'm looking for operating systems in which this code[1] would render all RAM inaccessible. Is there even such a crappy OS available?

So far I've tried Linux, Windows 10 and Windows XP. I'm pretty sure QNX and macOS (and all BSDs) handle this situation just fine.

While we're on the subject, could a driver allocate memory in such a way that it would be inaccessible?

[1]: pastebin.com/4ENeLJr4

Why would anyone use an OS where that would be possible?

try TempleOS. Even the author said it'd be easy to crash if you were trying to.

Take out all but one stick of RAM. Boot your OS of choice. Open up Chrome, Firefox, Folding@home, BitTorrent clients, etc., and you'll run out of memory soon enough.

I don't know, basically my C++ professor said that memory that isn't free()'d or delete'd is lost until a reboot. I told him it doesn't work like that in modern operating systems, to which he simply said no. So now I'm trying to find some operating system in which his claim is actually true.

It isn't.
Your professor is an idiot.
Consider changing course.

is there a c++ compiler for it? or just HolyC?

three possible scenarios to what he said
>you didn't understand what he actually said
>he uses the one OS where that happens
>he's an idiot who doesn't do his homework, isn't up to date with what normal people use these days, and thinks he's above everyone just because he's teaching

Either way, tread lightly; you need to get your grade and move on to better things, not argue with someone in the middle of the road

I'm not arguing with him anymore, he called me names and shit. But I'm still curious whether it's possible to achieve that in any way (even if it involves kernel mode and whatnot)

>you didn't understand what he actually said

I wish, anon, but I literally told him "The OS will recover all resources after the process terminates" and he just said "no."

I actually believe you; the third option I gave is the most common scenario between students and teachers. In fact, you came here after trying to fuck your OS's RAM not just once but three times.

It sucks when you run into someone who's more concerned with being right than with actually learning something new.

If you ever find one it'll probably be some old OS; try something like Win98 in a VM. If this thread is still alive by then, post it here just to check. It's always nice to find crap to fuck around with on old OSes, in VMs or on old machines.

I've heard conflicting answers to this: is it good practice to clean up your memory before exiting? Most people seem to say you shouldn't and should just let the OS handle it. Maybe some kind of ifdef for known systems: exit immediately on a known system, clean up manually otherwise.

The retard was probably talking about virtual memory. Basically, modern CPUs map memory to "virtual" addresses so the OS can appear to have more memory than the hardware actually provides (think of how swap would work otherwise).
But malloc sometimes doesn't release these virtual addresses back, so Linux has to hand out more of them until you run out. That could be a problem; rare, but a problem nonetheless.
malloc will usually reallocate over already-freed virtual addresses, but if you have lots of programs allocating and then deallocating, you can end up with enough memory yet not enough virtual addresses to alloc more.

tl;dr: he was probably thinking about malloc but being a retard about it
PS: fuck jemalloc

I do think you should clean up the memory you use, even if only because it forces you to keep track of what you've allocated.

But that problem would also cease to exist after the process terminates, no?

>allocate a lot of memory
>run out of addresses
>process crashes/gets killed
>OS recovers memory and all of the used addresses

But I have to admit I didn't think of that. Another question: this problem could only affect systems with massive amounts of virtual memory, right?

MS-DOS 6.22/Win 3.11, I think. I never programmed on them so I can't say for sure, but I would definitely get out-of-memory errors from certain programs, and other programs would crash the OS if I kept it running for too long

While the OS will recover the memory, you REALLY want to be doing it yourself. Not for the OS's sake, but for the sake of your own programming habits. Don't be sloppy with manual memory allocation; it will become a terrible habit if you are. Clean up your memory always.

ALWAYS.

It is possible your professor is lying on purpose to try to scare his students into using proper memory management habits. It's kind of disingenuous but he probably didn't expect faggots like you to miss the point entirely and be pedantic about it.

The OS recovering memory is not the point. The point is you should always tie up your loose ends when you deal with memory.

Older Windows versions do not free memory on close. Windows 98, for example, does not. Whether modern OSes do or not is irrelevant; it's a bad practice either way.

Even old Win9x freed memory when a program closed.

Malloc/new/free/delete simply aren't OS calls. They're C/C++ library calls. The C/C++ library allocates pages of memory with mmap or similar. It then doles out smaller blocks of memory inside that to your code, using its own data structures to track which parts are free.

If your program exits, the OS kernel doesn't care what the libraries or your program may have written into the pages it allocated. It just frees them because they were allocated.

There are some other system resources that shitty OSes don't reclaim properly. Old versions of Windows seem to occasionally run out of GDI handles for instance, and nothing short of a reboot fixes it.

Windows 98 memory management was atrocious, but I'm pretty sure it did free all memory that a process allocated. Where are you getting this?

>It is possible your professor is lying on purpose to try to scare his students into using proper memory management habits. It's kind of disingenuous but he probably didn't expect faggots like you to miss the point entirely and be pedantic about it.

I never said you shouldn't use proper management habits; in reality you should be using smart pointers and RAII. Also, I think there are enough good reasons to do memory management properly that it's not necessary to make up fake ones.

He's pressing Ctrl+C while the program is running. You don't handle shit about memory when you receive SIGINT. Also, cleaning up memory when you're about to terminate doesn't make a whole lot of sense; all you're doing is wasting CPU cycles. Memory management has nothing to do with what happens AFTER your program terminates, it only matters during program execution.

>using c++

Wow, your professor is retarded.

An RTOS like Windows CE, lmao. Why exactly do you need this? Do you really need it? Not really

Many professors' knowledge is outdated, to say the least. Never argue with them; it doesn't get you anywhere.

This. The sole purpose of interacting with professors is to obtain good grades.

>all you're doing is wasting CPU cycles.
Oh well. CPU cycles are plenty these days.

>Also cleaning up memory when you're about to terminate doesnt make a whole lot of sense
Sure it does. One day you're going to integrate your program into a larger project, or add some more to the end, and then you're gonna have memory leaks because you forgot that you never freed any of the memory at the end.

Are you sure he explicitly mentioned the free/delete calls? Because, at least on Windows, you must free certain operating system resources that you allocated before program exit, or the handle might become owned by the parent process and be unavailable unless the parent frees it or exits cleanly itself. Like handles to registry values, IIRC.

You're not actually giving any arguments, but whatever. Freeing resources before terminating in error situations is a bad practice: you can't do any reliable debugging of the crash dumps, and you might even crash in your free-on-crash routine if it tries to free resources that were never allocated, masking the root cause.

Don't fuck this up OP, you have an opportunity to call your professor a retard in public.

Drop a steamy shit on his desk mate.

Good luck passing that exam after that.

I don't need it, I'm just curious if such thing exists

Not free(), but he explicitly mentioned new and delete/delete[]; he was only talking about dynamic memory allocation

Show him this code:

#include <cstddef>

const std::size_t SixGB = 6000000000;

int main()
{
    char *big_arr = new char[SixGB];
    (void)big_arr;  // never deleted, on purpose

    return 0;
}

Before you run it, tell him, "this will mess up my computer, right? I won't get that RAM back?" Then run it several times. Seriously, it's so fucking easy to prove him wrong with an example.

I did, anon; that's basically the OP webm, but I'm not showing it to him

why not? scared? :^)

You need to actually use that memory; allocating it isn't enough. You might go and allocate 100 gigabytes and your system will happily oblige, but unless you fill that 100 GB with data, no physical memory is actually committed.

He doesn't seem willing to change his opinion, and he's in charge of hiring for a position that I'm interested in.

What compiler should I use in Windows 95?

That's true on Linux. Windows doesn't overcommit and will throw bad_alloc if you try to allocate more memory than RAM + pagefile.

wow, the pain

Turbo C++ ofc :^)

>While we're on the subject, could a driver allocate memory in such a way that it would be inaccessible?
That's a very general question. A driver usually has access to full system resources if it's not virtualized, but not always down to the bare metal. Windows, for example, has user-mode and kernel-mode drivers; the latter can potentially allocate memory in the real/raw address space, where it would just sit until freed or until a reboot. But it's not something a driver usually does, as memory virtualization is set up well before the OS kernel drivers are loaded. And there are easier ways to destabilize a system if you've got raw RAM read/write access: corrupting the interrupt vector table, for example, can be done with a single write on most ISAs.

>he's an idiot who doesn't do his homework, isn't up to date with what normal people use these days, and thinks he's above everyone just because he's teaching

Smart money's on that one. I outright gave up on CS because of the number of presumptuous jackoffs who assume nothing has changed since they worked at Bell Labs.

bump

Your professor is right if you're talking about shm; that isn't freed even after process termination, by design

We're not talking about error situations. We're talking about regular-ass closing, or rather whenever you're done with the memory.

Seems like you're the weak link then who can't just get your work done and ignore the professors.

>I outright gave up on CS
lol

>he didn't know the point of college is to teach yourself shit
>he didn't know professors only exist because the majority of students can't do it

It's just not a guarantee. Your desktop OS does it, but that doesn't mean a tiny RTOS will.

Shared memory? Shouldn't it be freed after all processes that hold a handle to it are closed? I only have limited experience with shared memory, on QNX

It'll just resort to virtual memory, become horribly slow, and eventually spit out an error about insufficient storage space.

>tiny rtos
I know it's not guaranteed, can you name one please? I'd like to play with something like this

I opened a png with mpv with window-scale=100

my computer went nuts

>mouse movements took 20 minutes to draw
>alt-tabbing was similarly delayed by 10-20 mins
>no memory to load the Ctrl+Alt+Del screen
>video goes black
>AMD driver probably shit itself
>synthwave in the background starts dropping out and coming back
>finally the synthwave stops and the computer starts making some crazy machine noise
>never actually BSODs, but I gave up recovering the session after the noise got too spooky

I was sitting at 70-something days of uptime when this happened and had Firefox and Chrome maxed out

only on Cred Forums will someone understand your 1337 tales

I don't think FreeRTOS deallocates heap until the idle task runs, for example.

It's lost until the process dies

That said, the memory DOES have to be freed; it's just the kernel that does it for you

>Clean up your memory always.
No.
You don't want to do that if you're just about to exit; all it does is slow down the program's shutdown for no reason.

I mean it is lost until you reboot the program :^)