Rude Slop Behaviour
This essay aims to introduce social stigma around certain types of AI use, or, at the very least, to reduce the amount of slop I receive.
What is slop?
I like the deepfates definition:
Watching in real time as “slop” becomes a term of art. the way that “spam” became the term for unwanted emails, “slop” is going in the dictionary as the term for unwanted AI generated content
Slop characterises a certain lack of care that has always existed in society. Slop is the AI-flavoured cousin of what philosopher Harry Frankfurt would call Bullshit.
When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he consider his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says.
Bullshit is corrosive because, unlike a lie, which at least concerns itself with the value of truth, bullshit does not place itself on this axis at all. Similarly, sending AI slop to other people is an act that scarcely concerns itself with any kind of social proceeding or connection, whether benign or charged.
Frankfurt observes that bullshit proliferates when people are obligated to speak on topics whether or not they possess the knowledge or certainty to support their position.
AI slop is produced under exactly these conditions, but industrialised.
Impolite AI use
If you send AI-generated output to communicate with me, you are bullshitting me and I will call you out on it.
AI itself is not the problem. It is the slop merchant's lack of consideration. When I'm transacting with people, I want to engage with their mental picture. They must not be a low-effort mediator for an AI system that I could transact with directly. A person may feel free to use AI insofar as they have incorporated the results of that usage into their mental picture.
If you want to engage with me, you have to write the output yourself, as proof that you have assimilated whatever you have read or learnt (whether from an AI or not).
The asymmetry of responding to slop
An important feature that elevates slop to the highest echelons of rudeness is the asymmetry.
Slop is easy to produce and easy to spread.
But slop is especially hard to counter. This is because, unlike the more classical flavours of bullshit, slop is produced by language models that are competent at producing grammatically correct, formal, and convincing text. It is, in fact, often correct in the isolated setting in which it is produced. But it is totally unconcerned with ideas of correctness or wrongness, for it lacks the total context of the social transaction between two parties. It is fraught with a certain purposelessness and detachment from meaning. Even worse, it offers a certain plausible deniability that is often taken advantage of.
The recipient of slop now has to:
- expend more effort to read the output than was required to produce it,
- potentially engage with possibly correct but contextually detached AI output,
- figure out what the slop sender's actual mental picture is,
- and respond to further, easily produced slop.
This asymmetry of effort makes this a socially violent act.
My slop policy
The asymmetry means that if I suspect that I'm on the receiving end of AI-generated output, I will duly inform the other party that I have not read the output due to my suspicion of it being AI, and will ask them to expend the necessary effort to tell me, in their own words, what they want to tell me. This chance for correction is necessary because the pressures of modernity have conditioned people to produce bullshit, and have even rewarded this production.
If I continue to receive slop after my gentle nudge, I will refuse to engage and end the social proceeding entirely, which is in line with how anyone would respond to other forms of rudeness.
Appendix
Detecting AI output
How do I know if someone is sending me AI output? There are lots of tells. I am an extensive user of multiple AI systems, and I'm especially attuned to the subtleties of their writing style and content. Most people who extensively use AI can reliably detect AI output.
If someone has edited or iterated away the signs of AI writing to the point that I cannot tell that it's AI any more, they have likely expended a lot of effort and have fully assimilated what they want to tell me, and thus, I'd have no objection to it. This is not bullshitting.
What if there are false negatives, which would lead me to incorrectly assert that someone's handcrafted output was done by an AI? I am willing to pay this social cost, given the asymmetry involved with dealing with slop. As it is, such a scenario is likely to occur when the person's output has the same asymmetric quality as AI slop—in which case my negative signal might not be so bad, even if it's the wrong negative signal.
Reviewing AI code
As a software engineer, my policy looks different when dealing with AI code. Unlike prose or images, code is categorically verifiable—it either achieves what it needs to do or it doesn't. I will, in fact, encourage people to adopt AI-assisted coding if it makes them produce better work more efficiently.
That said, the burden of verifying AI-generated code and demonstrating that it works is on the owner of the code, not me.
If you are asking me to review AI-generated code:
- You should have read, understood, and assumed accountability for the quality of all the AI-generated code.
- It must contain proof that it works—you must show me that it passes test cases and demonstrate correct functioning.
- Your description and explanation of the code should not be slop—it should take more effort for you to produce the explanation than it takes me to read and understand it.