Fail, fail, fail, fail, succeed

Hello My AI Friend


“Life’s a funny thing. You spend years chasing some dream, hunched over a canvas like a dog with a bone, only to be left with a pile of paint-smeared rags and the nagging suspicion you might have missed the whole damn point. But then, sometimes, just when you’re about to chuck it all in, light hits that bastard brushstroke just right, and there it is – a sliver of beauty caught, a whisper of the world’s messy glory snagged on the teeth of your creation. That’s the pay-off, that fleeting moment where the pain of creation gives way to something… well, not exactly peace, more like a grudging respect for the whole damn, beautiful mess of it all.”

This text was generated from the prompt “Write a paragraph about art in the style of David Thomas Peacock.” I fed it into Gemini, Google’s LLM, after hearing Andrew Huberman interview Marc Andreessen on his podcast.

My little off-the-cuff experiment resulted from Andreessen’s reference to 2021 as the year large language models stopped freely trawling the web for training data. I’d heard about this before, but a quick search doesn’t confirm it. Unsurprisingly, the history of AI training data sets is not easily accessed.

Since I started this blog in 2017, I’ve often wondered if my musings had indeed been gulped up by toddler AIs. The text generated doesn’t entirely convince me, but it does bear some resemblance to my writing.

Clearly, these things are just getting started.

More later…