In AI there is a framework called Generative Adversarial Networks (GAN). You have two networks competing with one another. One produces content (the Generator); the other (the Discriminator) judges whether that content is fake by comparing it with actual real-world content. The generator uses noise (random data) as a starting point and transforms it into something that resembles quality content. Once the discriminator starts misclassifying the generated content as real, it has lost the battle.
At the end of the day you are left with something that can produce quality content out of noise.
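To make the idea concrete, here is a minimal sketch of that adversarial loop in PyTorch. The tiny network sizes, learning rate, and the 1-D "real data" distribution are illustrative assumptions, not a production setup.

```python
import torch
import torch.nn as nn

# Generator: transforms random noise into a candidate sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that its input is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    # "Real" data: samples from N(4, 1) stand in for real-world content.
    real = torch.randn(32, 1) + 4.0
    noise = torch.randn(32, 8)
    fake = G(noise)

    # Train the discriminator to label real as 1 and generated as 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to make the discriminator say "real" on fakes.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()
```

The key move is the competition: each network's loss is defined by the other's judgment, so the generator gets better at turning noise into convincing samples precisely because the discriminator keeps getting better at catching it.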
Food for thought.
The question is whether you can fine-tune your personal LLM to reliably recognize what is good and what isn't, at least until we reach AGI : )