For humans, by human

Photo by Alexander Schimmeck / Unsplash
I believe humans deserve to know whether the content they consume was generated by AI or created by another human.

Back in 2022, when ChatGPT was released, everything was about change. I remember my excitement - I had always wanted to start blogging but considered myself a math guy, capable of writing equations but not beautiful words. Beyond bullet points and concise short sentences, I couldn't really create long-form writing. Here was the first trap: somehow we were convinced that this was the way to create content, most likely due to how SEO rates websites.

Shortly after playing with ChatGPT, my blog was born, because AI could enhance my writing and turn my bullet points into full sentences. To me it looked like a charm, like a hack. Copywriters were screwed. Obviously, I wasn't the only one with that idea. All of a sudden, social media, blogs, and even documents morphed into perfectly written English books.

But there's a catch: instead of getting better, content only got worse.


David - Michelangelo

Humans seek perfection. We pursue it; it's in our nature. Yet we'll never catch it - we'll always stay imperfect. This is what makes us human. Whenever something comes close to perfection, it quickly draws attention. Michelangelo's David, for example, looks like a man turned into stone, not carved from it.

Everyone knows Achilles, the greatest warrior of ancient Greece. He was known for being undefeated, but even he, a demigod, had his weak point: the Achilles heel. The idea behind the story is that we are imperfect.

In data science, there's a famous saying: "Garbage in, garbage out". But LLMs have shown this is no longer entirely true. You can turn garbage like "write blog about Italy, include 5 place to visit. Act as a professional content writer" into a blog post. Even more, you can build an entire blog out of such prompts, a Twitter account, a LinkedIn account, or even a book! But what's the value of that content?


Value is created by putting in effort. Effort can take many forms: spending a whole life understanding one thing and writing a book about it, spending a year painting a picture, or just writing down the thoughts that appeared in your head. It could be anything. The common factor between them is effort.

If something is created without effort, it doesn't have much value. But with LLMs, people can do exactly that. So does content generated that way have value? For me, no. People might argue that prompt engineering is a thing, and I agree; supervising AI is also a thing. But it's not even close to creating something from scratch.

Words are just artificial carriers of emotions. Emotions are truly natural; words exist only to describe them. You cannot understand words without understanding the corresponding emotions. It's like a hash function, f(emotion) = word, and as with hash functions, this can lead to collisions. It's also almost impossible to build the inverse function, f⁻¹(word) = emotion. So how would a machine make sense of words if, in fact, they are just words?
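The hash-function metaphor above can be sketched in a few lines of code. This is purely illustrative: the emotions, words, and the `f` mapping below are made up for the example, not a real model of language.

```python
# A toy mapping f(emotion) = word. Like a hash function, it is lossy:
# many distinct internal states collapse onto the same word.
EMOTION_TO_WORD = {
    "quiet pride after finishing a project": "happy",
    "relief that the exam is over": "happy",
    "joy of meeting an old friend": "happy",
    "dread before a difficult conversation": "anxious",
}

def f(emotion: str) -> str:
    """Hash a rich internal state down to a single word."""
    return EMOTION_TO_WORD[emotion]

# Collision: three different emotions map to the same word,
# and the distinctions between them are lost.
colliding = [e for e, w in EMOTION_TO_WORD.items() if w == "happy"]
print(len(colliding))  # 3

def candidates(word: str) -> list[str]:
    """Attempt the inverse f⁻¹(word): all emotions that hash to this word."""
    return [e for e, w in EMOTION_TO_WORD.items() if w == word]

# The inverse is ambiguous - given only the word, there is no way
# to know which of the three emotions produced it.
print(candidates("happy"))
```

A machine that sees only the right-hand side of this mapping is working with the hashes, never with the inputs.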

Degradation spiral

LLMs are trained on data. It's fair to assume this data comes from the internet, and prior to December 2022, virtually everything on it was created by people (I know, I know, ChatGPT wasn't the first LLM, but you get my point). So LLMs are trained on humans. They can use that data to produce fast, long answers that look great at first glance. But since AGI isn't here yet (Sarah Connor, are you reading this?), they cannot invent anything. An LLM is just a snapshot of data up to some point in time, unable to move forward on its own.

People now use ChatGPT to learn new things, and that's great. But people also use ChatGPT to create content that only floods the internet. Future LLMs will then be trained on that content, presumably with a not-so-good outcome. This could stall human progress, because people will most likely become so dependent on LLMs that they can no longer think independently.

We'll reach a point where everything looks perfect at first glance but has no value inside. No value, because no effort was put into making it.

AI Filter

Fortunately, people are great at skipping trash and cherry-picking value. Think about ads on websites. You don't read them; you don't even look at them. Without any explicit training, you became perfectly trained to skip them. It's magic. Just by looking around, you see what's valuable.

The same happens with AI-generated content: whenever I see Midjourney pictures or ChatGPT blogs, I just skip them. I don't know why, but I know they're artificial. That doesn't mean they're always worthless, but in most cases no effort was put into them. And no effort means no value. No value leads to disregard, as humans tend to seek value in their surroundings.

For humans, by human

That's why I believe humans deserve to know whether the content they consume was generated by AI or created by another human.

This blog, starting from this post, doesn't use any kind of AI to create content or to enhance the writing. I'm not a native English speaker, a professor of philosophy, or a PhD in artificial intelligence. I'm a human who questions the status quo. A person who is imperfect - and that makes me human.

Don't get me wrong: I strongly believe the latest advances in artificial intelligence are among the biggest achievements of this century, or maybe even ever! I'm a big fan of them, and I try to incorporate them wherever I can. However, this is what I keep in mind:

I'm more than happy to see you here, and I'm very curious about your take on this topic. Cześć!

Maciej Marzęta


Founder of MarzTech. Python Technical Leader at a unicorn startup. Crypto enthusiast and programmer for life.
Cracow, Poland