Root on ZFS

Every day I walk through the CS department at my school and I see hot girls judging all the NEETs walking by.

> "He looks like he has to take his whole array down to repair it"
> "He's probably still using hardware RAID"
> "He uses container frameworks on top of overlayfs, probably isn't smart enough to recognize how well container volumes would map to filesystem volumes"
> "That kid still thinks BTRFS has a chance, what a weirdo."
> "That kid smells, I don't think he ever leaves his dorm. And he probably spends all his time online and still hasn't heard about how stable ZFS on Linux is."
> "Wow, he's cute. But I had a class with him, and he doesn't have a cron job to regularly send ZFS snaps to his server. Backs his stuff up on dropbot and a USB drive! Loser!"
> "Damn, here comes user. He's so hot. He even said he would set up his VPN to allow me to put his server as my binhost in my make.conf and use his kernel. That would be so illegal, to link CDDL software against the GPL and to redistribute it. Gets me so turned on."
> "Wow, here comes Chad. I love a guy who knows his way around FreeBSD"
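And for anyone wondering, the cron-job-to-send-snaps setup she's talking about is trivial. A rough sketch (pool name `tank/home`, the `backup-host` machine, and the script path are all placeholders, adjust for your setup):

```shell
#!/bin/sh
# Nightly snapshot + incremental send to a remote pool.
# "tank/home" and "backup-host" are placeholder names.
set -eu
TODAY="tank/home@nightly-$(date +%Y%m%d)"
YESTERDAY="tank/home@nightly-$(date -d yesterday +%Y%m%d)"
zfs snapshot "$TODAY"
# Send only the delta since yesterday's snapshot
zfs send -i "$YESTERDAY" "$TODAY" | ssh backup-host zfs receive -F backup/home
```

Then one crontab line, e.g. `0 3 * * * /usr/local/sbin/zfs-nightly.sh`, and you're done. No dropbox, no USB drive.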

Is anyone else reaping the huge benefits with the ladies that come with ZFS usage? Because I'm swimming in it over here.

Someone please engage me in a conversation about filesystems. ZFS is pulling ahead as the clear king. It's fully supported on Ubuntu 16.04, FreeBSD, and Gentoo.

I heard ZFS is harmful without ECC.

why zfs and not glusterfs?

In a way yes.
If your data is corrupted in memory and ZFS writes it out to disk, well it'll seem like the data is OK but it'll be corrupted.

desu I have problems with latency spikes locking up the whole system for seconds at a time and the caching and the out-of-tree module breaking

I'm thinking bcache+nilfs2 for my next reformat, it just werks a bit better than zfs

With a single drive this is unfortunately very possible. But if you regularly take snapshots you have little to worry about.

If you have multiple drives, the blocks are hashed and unless you hit a scenario where a bit flips and it results in a hash collision, things will 100% recover even if your ZIL goes to shit.
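The self-healing described above doesn't just happen at read time, you can force a pass over every block with a scrub (pool name `tank` is a placeholder):

```shell
# Verify every block's checksum against disk; on a mirror/raidz,
# bad copies are rewritten from a good replica automatically.
zpool scrub tank
zpool status -v tank   # shows scrub progress and any repaired errors
```

Run it from cron monthly and you'll catch bit rot long before you need a restore.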

Do you have small amounts of ram, or swap on a slow disk, or swap on a zvol?

8GB, swap on ssd

I don't use zfs for my root, it currently holds most of my /home though. 2x2TB mirror 60% full

>zfs on external hdd for dat compression
>usb cable jiggles resulting in disk detected as dead
>zfs refuses all disk activity
>can't clear pool error without reboot
>dozens of d-state processes blocked on io to the drive
>even management tools like zpool status and zfs list block
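For reference, the sequence that *should* recover from a transiently dead disk, and which all just hang in D-state in this situation, is roughly this (pool and device names are placeholders):

```shell
zpool status -v           # identify the FAULTED/UNAVAIL device
zpool online tank sdb     # reattach the re-detected disk
zpool clear tank          # reset the error counters on the pool
```

When the kernel module has already wedged the pool's I/O, none of these return, hence the reboot.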

c'mon my satan trips demand an answer.

Google "glusterfs in production" and the first page is largely news.ycombinator comments of horror stories. It's probably okay if you have a use case that really fits the model, and can afford the time and resources it takes to tweak your settings and invest in dozens of drives... But that is not me.

> Not doing ZFS snap and ZFS send

You reap what you sow. Don't put ZFS on your USB hard drive unless you have another pool and you're just zfs receive'ing
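i.e. treat the USB pool as a dumb replication target only. Something like this (pool names `tank` and `usbpool` are made up):

```shell
# Snapshot the real pool, then replicate the whole thing to the
# external pool. If the USB disk dies, you lost a copy, not data.
zfs snapshot -r tank@backup-2016-09
zfs send -R tank@backup-2016-09 | zfs receive -Fdu usbpool
```

That way a flaky cable only ever interrupts a receive, which you just rerun.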

8GB ram should be enough for that. I'm surprised that reads are a problem though, did you take care to enable the right tweaks from man zfs when you created it?
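The tweaks I mean are the usual creation-time ones; a sketch with example values (pool/device names are placeholders, check zfs(8) and zpool(8) for your platform):

```shell
# Align to 4K physical sectors; wrong ashift can't be fixed later.
zpool create -o ashift=12 tank mirror sda sdb
zfs set compression=lz4 tank   # cheap CPU-wise, usually a net win
zfs set atime=off tank         # skip access-time writes on every read
zfs set xattr=sa tank          # ZoL: store xattrs in dnodes, fewer IOs
```

Getting ashift wrong on consumer disks that lie about their sector size is the classic source of mystery latency.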

ackshually, ZFS is great. but, AppleFS is basically ZFS, only missing a couple of the good bits.

Get your butts ready for AppleFS

can't find the original post anymore, but here's Ars Technica pretending they wrote an article about it by copy-pasting the entire conversation onto their website

>> Not doing ZFS snap and ZFS send
>giving a reply that has no relevance to my post

>"He looks like he has to take his whole array down to repair it"

This part requires hot swappable hardware, and cheap storage like the backblaze memepod doesn't offer it.

It's a frustrating limitation, by the time you've bought infrastructure that properly supports hot swap, you've halfway paid for a hardware RAID card.

probably one of the top threads on Cred Forums at the moment

>especially on linux
pick one

They aren't going to be able to work around the licensing problems. ZFS is a server solution to most of the world, and usage isn't going to go up if admins have to perform manual steps. BTRFS is going to take over, since it's becoming the default option on some systems currently.

>Inflexible, can't add drives to existing disk groups, can't efficiently make use of different sized drives
>No defragmentation and performance permanently dies if it goes above 90% usage.
>Sucks down vast quantities of RAM for itself instead of using the OS's cache.
>Online deduplication was a mistake.
>The only answer to performance problems is to just throw RAM and SSDs at it
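On the RAM point: the ARC is capped by a module parameter, so "sucks down vast quantities" is really "uses whatever you let it". A sketch for ZFS on Linux (the 2 GiB figure is just an example):

```shell
# /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (value in bytes).
# Pick roughly what you can spare; ZFS frees ARC under memory pressure anyway.
options zfs zfs_arc_max=2147483648
```

Takes effect on module reload or reboot.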

BTRFS RAID0/1/10 is bretty gud even if the 5/6 is rubbish.

But bcachefs is what will take us to the promised land.

What the fuck am I reading lmao

almost /thread
so btrfs was the right decision after all, it doesn't suffer from the stated zfs problems?

> can't add drives
True, but it's extremely flexible in that you can replace drives with zero problems, and move volumes between pools, making it simple to take a pool down and bring it back up with new disks.
> Different sized drives
This is a situation that only idiots would find themselves in. Use another pool if the devices are not supposed to be chained together.
> No defrag
This is desirable. Let ZFS handle this.
> performance permanently dies if it goes above 90% usage
True for every other filesystem.
> Eats ram
So buy more, it's dirt cheap now.
> Online dedup was a mistake
> Works best with resources
God forbid! I'm happy my FS eats my ram and cache. Filesystems don't need to be short and sweet. 90% of what I do involves my disks, I am glad to give it resources so that I can get good performance and awesome management tools.
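The "replace drives with zero problems" part is literally one command; the resilver runs while the pool stays online (names are placeholders):

```shell
# Swap a failing disk for a new one without taking the array down.
zpool replace tank sdb sdc
zpool status tank   # watch the resilver progress
```

Compare that to rebuilding a hardware RAID set and tell me again who the NEET is.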


>Not in Education, Employment, or Training
>CS Department
Fuck you.

And this is why nobody takes bsdtards seriously.

It's worse in some ways, better in others.

Correct, but on the other hand, it's so chock-full of bugs (lol oracle) that it's downright unusable in any even semi-serious setting.

just wait for hammer2

do you have any VMs running under qemu-kvm? especially any backed by zvols

This is why Facebook use it for... serving frontend?

nope, no vms and no zvols