Absolutely, it is recorded in a retrieval system, and generation is doing some sort of complicated lookup.
It is not.
Stable Diffusion’s model was trained using the LAION-5B dataset, which describes five billion images. I have the resulting AI model on my hard drive right now, I use it with a local AI image generator. It’s about 5 GB in size. So unless StabilityAI has come up with a compression algorithm that’s able to fit an entire image into a single byte, there is no way that it’s possible for this process to be “doing some sort of complicated lookup” of the training data.
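The back-of-the-envelope arithmetic behind that claim is easy to check (treating the ~5 GB figure as approximate):

```python
# Rough check: model size divided by number of training images.
model_bytes = 5 * 1024**3   # ~5 GB checkpoint on disk (approximate)
num_images = 5 * 10**9      # LAION-5B: five billion image-text pairs

bytes_per_image = model_bytes / num_images
print(f"{bytes_per_image:.2f} bytes per image")  # roughly one byte per image
```

A typical JPEG is on the order of hundreds of kilobytes, so a "lookup system" would need a compression ratio of several hundred thousand to one, per image.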
What’s actually happening is that the model is being taught high-level concepts through repeatedly showing it examples of those concepts.
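A toy sketch of that distinction (this is ordinary least-squares line fitting, not how diffusion models actually train, but the principle is the same): the trained "model" below is just two numbers, no matter how many training points it saw, and it still generalizes beyond them.

```python
import random

# Generate 10,000 noisy training examples of the concept y = 2x + 1.
random.seed(0)
data = []
for _ in range(10_000):
    x = random.uniform(-10, 10)
    data.append((x, 2 * x + 1 + random.gauss(0, 0.1)))

# Closed-form least-squares fit. The resulting "model" is two
# parameters (slope, intercept), not a table of the 10,000 points.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

print(slope, intercept)          # close to the underlying 2 and 1
print(slope * 100 + intercept)   # extrapolates to x = 100, far outside the training range
```

The fit recovers the underlying concept from examples and can answer queries it never saw; deleting the training data afterward changes nothing, which is not how a lookup table behaves.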
I would disagree. It is just a big table lookup of sorts with some complicated interpolation/extrapolation algorithm. Training is recording the data into the net. Anything that comes out is derivative of the data that went in.