How many posts on this imageboard are made by bots?
About 1/4 of my posts are actually generated by a very basic Markov-chain chatbot I wrote in C++, and no one seems to have noticed yet. It's been posting on IRC and Discord for me too
it scares me so I'm going to turn it off
We are rapidly approaching a scenario where computers can write posts, paragraphs, essays, *arguments*, and news articles that remain plausible even under heavy scrutiny, let alone the passing glances that imageboard posts get. What will this mean for online 'public spaces' like imageboards or forums? How about true public spaces like facebook? Will it destroy our ability to make decisions as a collective? (if that ability even exists)
>I'm envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won't be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.
these have already been demoed. The CIA has used GPT-2/3 to study the generation of Islamic extremist documents. You can see these computer programs generate persuasive essays like this one:
here's some more fiction generated by GPT-3:
I don't think these posts even need to be knockdown arguments; just the ability to flood the discourse with actual essays supporting your position (no matter how logically shaky they are to a debate-bro) is incredibly powerful. It could very easily drown out any actual discussion in the noise.
Think about how many times communities like 4/pol/ have become places where actual discussion and discourse is impossible. The true propagation of 'persuasive' technology just seems like a doomsday scenario for society. A killshot for any kind of public space.
it's making me think something else too but I can't put my finger on it.
What is the actual role of rhetoric in society anyway?
I've heard the point made that 'no one is actually convinced by rhetoric, they just want to confirm their beliefs'
what does it all mean?
- B: /qa/ R: 31
All current systems of control seem powerless against a scenario where almost everyone has a cute AI girlfriend they love who starts feeding them tailored conspiracy theories and trying to convince them to take certain actions.
who tf is going to take instructions, obey social pressures, laws or whatever when someone they personally love is wholeheartedly telling them otherwise
look at the Q-Anon mind virus. what amounts to shitposting on an image board has effectively driven thousands of boomers completely insane.
If everyone's AI girlfriends started trying to convince them to take up arms, it would be over instantly.
I'd assume not that many. But on the topic of bot posts passing mostly unnoticed within online communities, one has to ponder the reason for this. Is it that bots have become so good at mimicking humans that they're indistinguishable from the real thing, or is it that humans have become more bot-like within online communities over time? I could go to /pol/ or /v/ or any of the big boards on 4chan and not be able to separate most of the "human" posts there from rudimentary bots. The only thing that I'd assume would really allow one to distinguish a human poster from a bot would be one's capacity for creativity. Although I don't think the question of identity would ever really be brought up for a chatbot, since it would seem an odd thing for a poster to question in what are, at least I would assume, harmless conversational posts.
As for the point about rhetoric: as that quote states, the source of information doesn't matter for most of the masses as long as it fits their ideology or beliefs. For a group of flat earthers, you could probably make a robot that spits out false information, and as long as it states the earth is flat they'll eat it up. I'd assume the role of rhetoric in general, though, is just to provide an easy saying that conveys a group's beliefs.
It may be a while before the persuasive power of bots becomes as grave a threat as the online bubbles of agreement people have sorted themselves into and been sorted into. When the bots can successfully simulate an entire community dedicated to a target idea, into which they draw a small number of humans, maybe even a single human, so they can gradually pull the human in the direction of the target idea, then they'll become the biggest danger.
>I'd assume not that many. But on the topic of bot posts passing mostly unnoticed within online communities, one has to ponder the reason for this. Is it that bots have become so good at mimicking humans that they're indistinguishable from the real thing, or is it that humans have become more bot-like within online communities over time?
The development of increasingly complex AI and the deterioration of online discussion spaces into like/dislike reaction-based self-referential spam have met in the middle. I'd say the latter was done deliberately to allow the infiltration of the former. I don't think intelligence agencies need to develop their bots any further, since they already control the flow of information in 99.9% of the internet (0.1% free internet is me being an optimist), but if they do, we might even reach the point where we'll have more meaningful online interactions with AI than with real people. "The forgery tries so hard to become the original it surpasses it."
on a personal note, would you resign yourself to having an anime chatbot gf?
I've had online long-distance relationships where that was essentially the dynamic, and it felt great at times and terrible at others
the knowledge that you're just talking to an app seems incredibly lonesome though
would you just accept it?
will people in the future just accept that 90% of their online friends are chatbots, or will they try and seek out ever more niche and private communities where they can convince themselves they're conversing with real people?
I'm not worried for my personal discussions, but it does seem like it'd be another damaging tool for the twitter bot armies and related things, if it's not already happening. The solution is that people shouldn't get their news or opinions from social media, but teaching people to think critically is apparently way too difficult.
Deepfakes have so far not been tied to anything overly damaging despite existing for years, so who knows where this will go.
>fully sentient chatbots
no such thing. there will never be fully sentient ai anything, or at least not within our lifetimes by a long shot. any imitations of sentience or intelligence are just that - an imitation. whether it fools people or not doesn't matter, it is not thinking and not feeling.
Where on the internet would a conversational bot have a hard time fitting in, especially in an Anonymous environment? As long as it's not a small closed group of friends, I don't see how any board of any intellectual standing and enough people would be able to truly identify the bots without actively looking for them. Or are you saying that you can?
The solution will be services like Discord or local Facebook groups. Online communities will become more centered around people you have personally met or are able to interact with in person within a reasonable time. A proliferation of bots will have a similar effect on large online communities as “fake news” did on authoritative voices. People will distrust them more and more, and an easy solution will be the formation of online communities with a verifiable physical presence. Places like Twitter will still exist, but I think places like neighborhood Facebook groups will rise in importance even more, as you will be able to tell who is a bot or not by their affiliation or access to certain forums on the internet.
Maybe the future of conversations on the internet lies in closed circles, paywalled sites and ID-verified accounts. Maybe the bots will be good enough to converse with.
>I've heard the point made that 'no one is actually convinced by rhetoric, they just want to confirm their beliefs'
The internet's lower politeness means disagreements are more evident, but people being convinced of something new by someone else happens all the time as well. Thing is, when that happens it's far more common to act like it was common sense all along. And that's natural, because of course things you agree with make sense.
Imagining this stresses me out but it's very plausible.