You can make a hash out of a file

>you can make a hash out of a file
>modifying one single byte of a file modifies the entire hash

Why can't you make a file out of a hash?

You can, it just takes a huge amount of computing power.

There are an infinite number of files that have the same hash.

why can't you brute force taytay nudes?

You can, I do it all the time to save space

You should use the base64 hash function.

Because multiple files can have the same hash.
3+2 has a single output, which is 5, but 5 doesn't have a single input. It could have come from 7-2, 1+1+1+1+1, etc. Hashes don't have a way to backtrack; encryption does.
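The many-inputs-one-output point can be shown with a toy 1-byte "hash" (sum of bytes mod 256 — a deliberately terrible checksum, used here only for illustration):

```python
import hashlib

# A 1-byte checksum is a (terrible) hash: many different files map to
# the same output, just like both 3+2 and 4+1 sum to 5.
def toy_hash(data: bytes) -> int:
    return sum(data) % 256

print(toy_hash(b"3+2"))   # 144
print(toy_hash(b"4+1"))   # 144 -- different file, same hash

# A real hash has the same many-to-one property; collisions are just
# astronomically hard to find on purpose:
print(hashlib.sha256(b"3+2").hexdigest() == hashlib.sha256(b"4+1").hexdigest())  # False
```

Given only the output 144, there is no way to tell which of the many matching inputs produced it — that's the missing "backtrack function".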

>Why can't you make a tree out of a pile of ash?
gee I don't fucking know

Not the same thing. Such an act wouldn't violate any physical laws (even though it's extremely unlikely to happen), but the information to get a file back out of a hash simply isn't there.

en.wikipedia.org/wiki/Pigeonhole_principle
Basically, a hash with n bits of output can only carry up to n bits of data without any ambiguity. If you try to store a 257-bit file in a 256-bit hash, you lose at least 1 bit of data.
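The pigeonhole argument can be checked exhaustively at toy scale, assuming an 8-bit "hash" that just keeps the low byte:

```python
# Squeeze all 512 possible 9-bit "files" through an 8-bit toy hash.
# Pigeonhole: 512 inputs, at most 256 outputs, so collisions are
# unavoidable no matter how clever the hash is.
def toy_hash(x: int) -> int:
    return x & 0xFF  # keep only the low 8 bits

buckets = {}
for f in range(2 ** 9):          # every 9-bit file
    buckets.setdefault(toy_hash(f), []).append(f)

print(len(buckets))              # 256 distinct hash values
print(buckets[5])                # [5, 261] -- two files, one hash
```

This is exactly the 257-bit-file-in-a-256-bit-hash situation scaled down: one extra input bit means every hash value is shared by two files.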

you think the information to get the tree out of the ash is there?

if you add enough energy to it, it would be possible to get the wood back

kekd, heres your (You)

What's stopping me from building a wooden house instead?

That ash can be reversed into an infinite number of configurations.
It could even be made back into a tree, but it won't look like the original tree.

>>(OP)
What specific math problem had the correct solution 14?

for any hash function without practical collisions, "decompressing" any file larger than the hash itself would take a fucking eternity.

It couldn't be made back into a tree; trees have DNA, and that would have been destroyed. There is no way to know what the original DNA was.

we should have a program that generate random files
another program reads those files as if they were jpegs and saves any picture that resembles a human form
we then sift through the photos until we score celebrity nudes

Why do you think DNA is special? You have to recover protein structures of millions of tree cells and you only think of DNA.

With one way hash functions you can't, you need a two way hash function

>two way hash function
even those have collisions which could ruin your result

this nigger is genius.

Even if you clone something, you need the rest of the cell to inject the DNA into.
And DNA doesn't contain enough info to completely reconstruct anything.
There is some kind of missing formula that must be applied.
In the same way, formulas derived from fractal geometry are now used to enhance satellite imagery and to compress and decompress data.
One day we may be able to reconstruct a living thing from nothing but a DNA sample, but we'll need better maths.

If you want to see some magic like that go play with PAR files.

>create a 10MB par file out of 100MB of data
>hypothetically 10MB of data on that 100MB file gets destroyed, in any location of the file
>can be recovered with the 10MB par file
This is more of a best case scenario, but still not far from reality.

en.wikipedia.org/wiki/Parchive

They already have that, but it's top secret stuff.

oh shit do you work for "them?"

I'll go outside and wait for the van

I've had this idea before, partly inspired by an episode of avgn.

>Okay, here's a really weird one. "If Mario Paint has 41,664 dots available

If you had the hash (using a hash function with no known collisions), the file size, and the file type, would it not be reasonable to expect a very small subset of files that match the hash, have the same number of bits, and can be interpreted as the given file type?

Holy shit

You've convinced me.

How do I kill myself now

you can it's called SHA256SUMS

I guess so, yes.

Depending on the hash, most of the data is lost when converting to a hash. Finding a preimage means hashing random values and comparing those hashes to the one you have. Even strings longer than 20 characters start to take years to brute-force; a file could have millions of characters, taking longer than the age of the universe. Hashes can also be deliberately designed to be slow to compute, making such attacks take even longer.

I wouldn't expect the subset of files to be very small, just smaller. As file size increases by one bit, you have twice as many possible collisions.

Just an average day in Cred Forums. Questioning the mathematical and logical structure of our world and arguing over the correct answer, even when we know there is no correct answer.

This. Not that it would matter, though, since not a single man would be able to witness a completely brute forced reconstruction of a decently sized file through its hash.

We do not actually know whether one-way functions exist, though.

en.wikipedia.org/wiki/One-way_function

Can you count how many collisions there are for a certain file size?

Approx 2^(n - hashLength), where n is the file size in bits.
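Plugging numbers into that estimate (a hedged back-of-envelope sketch; `approx_collisions` is just the formula above, not a library function):

```python
# 2**n possible n-bit files share at most 2**L hash values, so each
# value is hit by roughly 2**(n - L) files.
def approx_collisions(n_bits: int, hash_bits: int) -> int:
    return 2 ** (n_bits - hash_bits)

# the 257-bit file / 256-bit hash example from earlier in the thread:
print(approx_collisions(257, 256))  # 2

# a 1 KiB file under SHA-256: the collision count itself has ~2400 digits
print(len(str(approx_collisions(8 * 1024, 256))))
```

So even a tiny file has an unimaginable number of hash-twins; the hash alone can never say which one you started with.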

>we

But is this even used anywhere now?

Yes, you can make a file out of a hash
>.MD5
>.SHA
>files

Usenet uses it in combination with rar files all the time. Par can repair missing, incomplete, or damaged rar archives.

>with no known collisions
Do you know what you're talking about? It's pretty easy to generate a set of files such that at least one pair collides when hashed.