Who Created our Artificial Unconscious?

People think there’s an intelligence that tailors our social media feeds and the videos our kids watch on YouTube Kids. In reality, AI behaves more like the automatic decisions of our unconscious mind.

Welcome to AI Parenting Live, where we show you how to disrupt the invisible forces that control your family today. Our tagline is “don’t sedate, relate to create.” Today we focus on the sedate theme by talking about the machine that tailors the videos we watch and the social media feeds that occupy both our time and our minds.

We’re going to learn how screen time today is different from when we watched TV as kids.

Who created this new artificial unconscious?
What can we do about it?

There are three main reasons that I’ll explore: public reactions to online creepiness, limited political liability, and unconscious input.

Public Reactions to AI Creepiness

They’ve stopped the creepiness, but not the creeping

Dr. Ed Tse, founder of AI Parenting

An enraged father stormed up to Target’s customer service desk and demanded to speak with the manager about the flyer clenched in his fist. The flyer was addressed to his daughter. “You’re sending her coupons for baby clothes and cribs?” he yelled. “Are you trying to get her pregnant?”

The customer service representative promised to raise the issue with the manager when he arrived that afternoon. Sure enough, the crumpled flyer was full of maternity clothing, vitamins, and pictures of smiling infants. The manager took a deep breath and called the father to apologize.

The father responded, “I had a talk with my daughter. It turns out there have been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

This story is important for two reasons: 

First, it shows that it is easier to hide information from your family than it is to hide it from a computer. Parents will have less information about their own children’s activities than ever. It’s uncomfortable to admit that Netflix, Disney+, and YouTube Kids know more about your child’s interests than you do.

Second, rather than changing its creepy behavior, Target simply changed the way it showed pregnancy-related ads. Mixing coupons for baby cribs in with other household items reduces suspicion from eagle-eyed parents. In other words, they decided to stop the creepiness… but not the creeping.

Countless examples of creepiness like this are why the decisions of AI had to move below the surface and out of sight. Nobody wants a peeping Tom peering into every single aspect of their lives, so it’s better to make AI decisions seem like convenient coincidences rather than deliberate choices. The added benefit is that even when a prediction is wrong, you are none the wiser.

Existing regulation has focused on notification and consent rather than stopping tracking or granting you specific rights to your data. Sites require that we agree to cookies even if we don’t know what they are or how they work. That said, the image of a delicious treat you’ll regret later is quite fitting. Tracking notifications work the same way: companies don’t have to offer you service if you refuse to be tracked. Want to see a non-marketing-sponsored Internet? Just say no to every cookie tracking request. You’ll find that many sites won’t show you anything until you reluctantly agree.
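For readers curious about what a cookie actually does, here is a minimal sketch in Python of the idea: the site hands your browser a random ID, the browser sends it back on every later visit, and the site can then link all of those visits together. All names here are illustrative; no company’s real tracking code is shown.

```python
import uuid

profiles = {}  # tracking ID -> list of pages visited by that browser

def handle_request(page, cookies):
    """Toy server handler: issue or reuse a tracking ID and log the visit."""
    visitor_id = cookies.get("visitor_id")
    if visitor_id is None:                  # first visit: mint a random ID
        visitor_id = uuid.uuid4().hex
    profiles.setdefault(visitor_id, []).append(page)
    return {"visitor_id": visitor_id}       # sent back as a cookie

# Simulate one browser making three requests; its cookie jar carries the ID.
jar = {}
for page in ["/home", "/cribs", "/vitamins"]:
    jar.update(handle_request(page, jar))

print(profiles[jar["visitor_id"]])  # -> ['/home', '/cribs', '/vitamins']
```

The point of the sketch is that the cookie itself carries no personal data; the random ID alone is enough to stitch separate visits into one profile.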

Limited Political Liability

These companies aren’t being regulated…

In the Wolf of Wall Street, Jordan Belfort’s company Stratton Oakmont was found guilty of fraud related to an initial public offering. In reality, this company sued Internet provider and news service Prodigy in 1994 for posting an article that claimed Stratton Oakmont “committed criminal and fraudulent acts in connection with the initial public offering of stock”. 

Prodigy fired back, arguing that it should not be liable for content posted by its users: a similar suit against CompuServe had been dismissed four years earlier, with the court finding that providers could not be held liable for content posted by their users.

The New York Supreme Court decided that Prodigy was liable for the content created by its users because it exercised editorial control over the messages: it posted content guidelines, enforced them, and used screening software. This meant that putting more effort into moderation could mean more liability, a strong incentive for companies to cut content moderation to the bare minimum.

The US Congress overrode this decision when it passed the Communications Decency Act in 1996, which includes the now famous 26 words of Section 230 that created the Internet content world our families experience every day:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This legislation, along with the Digital Millennium Copyright Act of 1998, provided a safe harbor for Internet services to act as content libraries without fear of being liable for the content their users posted.

https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prodigy_Services_Co.

This also allowed AI to decide which advertisements to show you without any human intervention. The limited critical thinking of computers means that bad posts sometimes slip through the cracks, and in the end companies are not liable for the content that is posted.

AI can defeat grandmasters at chess, 9-dan Go players, and top contestants on Jeopardy! Yet ask an AI whether a video or post is appropriate for kids and it will struggle. The distinction between mechanical thinking and critical thinking is that machines can work through possibilities much faster than people, yet critical thinking remains the domain of humans.

This particular legislation allows AI to operate without consequence or risk of liability, although this is starting to change.

https://www.theatlantic.com/ideas/archive/2021/01/trump-fighting-section-230-wrong-reason/617497/

Unconscious Input 

Clicking a video, liking a post, or pressing the buy button all require conscious action from us, but companies need more information to get a complete picture of your interests and desires.

A Facebook engineer reported that the average view time for a single post in the feed was around 1.4 seconds. If you watch a post for, say, five or six seconds, you are watching roughly four times longer than average. You might not consider lingering on a post for a few seconds, or scrolling through a blog article, to be an explicit action. But this information can still be captured by tracking code embedded in the page, and this same data is used by AI to make decisions about what ads to serve you.
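To make the dwell-time idea concrete, here is a minimal sketch in Python of how such an implicit signal could be computed. The threshold and function names are our own illustrative assumptions, not any platform’s real code; only the 1.4-second average comes from the report above.

```python
AVERAGE_DWELL = 1.4  # seconds, the reported average view time per post

def interest_signals(dwell_times, threshold=3.0):
    """Flag posts watched well above the average dwell time.

    dwell_times: {post_id: seconds the post stayed on screen}.
    Returns {post_id: multiple of the average} for posts whose dwell is
    at least `threshold` times the average -- an 'interested' signal the
    user never consciously gave.
    """
    return {post: secs / AVERAGE_DWELL
            for post, secs in dwell_times.items()
            if secs / AVERAGE_DWELL >= threshold}

signals = interest_signals({"cat_video": 0.9, "camping_gear_ad": 5.6})
print(signals)  # only 'camping_gear_ad' is flagged, at ~4x the average
```

The user tapped nothing and typed nothing, yet the few extra seconds of attention become a ranked preference.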

Social media plays a role in priming customers to buy products. Gone are the days of banner ads with a different shape and style than the rest of the page. Most online advertisements use the same format as regular search results or posts from your friends. 

Another level of online advertising exploits how our brains are wired: we tend to trust sources we have seen over and over again. So a post you watched a few seconds longer could be followed by a different post from the same company. Perhaps you will even see a third post about the same topic before you ever see a call to action. By then you’ve seen a company you had never heard of at least three times, lowering any trust barriers you have against clicking the link.
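The exposure ladder described above can be sketched in a few lines of Python. This is purely illustrative logic under our own assumptions (the three-exposure rule, the names); it is not any ad platform’s actual implementation.

```python
from collections import defaultdict

CALL_TO_ACTION_AFTER = 3  # exposures before the 'buy' post is served

exposures = defaultdict(int)  # (user, brand) -> times this user has seen it

def next_ad(user, brand):
    """Serve soft familiarity-building posts first, then the call to action."""
    exposures[(user, brand)] += 1
    if exposures[(user, brand)] < CALL_TO_ACTION_AFTER:
        return f"{brand}: soft content post"      # builds familiarity
    return f"{brand}: call-to-action post"        # trust barrier lowered

for _ in range(4):
    print(next_ad("alice", "TentCo"))
# Two soft posts, then the call-to-action appears from the third view on.
```

The key design point is simply a counter per user and brand: nothing about the posts themselves changes, only the order in which familiarity is built before the ask.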

The same premise is used with online influencers. When you see an online influencer, you implicitly trust their advice because you have seen them many, many times. Since many influencers make their money through sponsorships, you’ll find the advertisement delivered in the influencer’s own voice and embedded in their video.

Worried about Screen Time?

If you’re worried about screen time, we’re here to tell you it’s not your fault. No one could have predicted how AI would fundamentally change the world we live in. That’s why it’s important to stay on top of the most recent trends.

Apply for AI Parenting Insider and we’ll gift you our popular Screen Time Quality Chart ($15 value) to help you move from screen time to quality time.
