Why am I writing about social media?

What are social media doing to us? Photo by Erick Butler on Unsplash

Back in September, I wrote about social groups and the evolution of social media. These topics are decidedly outside my realm of expertise (although social groups are a product of animal behavior so not totally removed from my own private universe). Why am I doing that, and why am I continuing today, even delving into Artificial Intelligence? What has got into me?

I think the current state of social media and the developing intrusion of AI into all aspects of our lives explain many of the problems in the world and point to much bigger problems to come. Now, it is possible that I am a reactionary old fuddy-duddy unable to acknowledge that the world is and always has been a place of change. It is also possible that my concerns are not entirely misplaced.

I came to my current jaundiced view of social media as I struggled to understand how it was that despite all the evidence and all the dedicated scientific study supporting and adding to that evidence, the majority of humans on this planet do not see climate change, or environmental change more broadly, as the existential threat that I believe it is. (That is ‘existential’ as in ‘threatening our existence’.)

Go back to 2009 when I was in the midst of writing my 2011 book, Our Dying Planet. I was convinced that the reason most people did not ‘believe in’ human-caused climate change was simply that the facts had not been well enough explained. If people had the facts, surely they would realize that what the scientists were warning them about was real. And I, of course, could be the person to present those facts in an effective manner. Following publication, I learned a lot about human behavior that I probably should have known already. People do not make decisions based on a rational evaluation of the facts. We decide based on emotion.

I tried to tell the story of how I came to this understanding in my 2021 book, Coral Reefs. It was a difficult lesson to learn, because I had spent my life as a scientist – a person trained to collect information (facts, if you will) about a topic, and make rational determinations based on those facts. I liked to believe, oh how naïve, that I proceeded rationally from decision to decision, in my work and in my life. And I presumed, even more naïvely, that most people were a lot like me. But, of course, they are not. We all make decisions based on our beliefs, desires, gut feelings. Even using ‘decision’ in that last sentence is inaccurate in that it implies a rational weighing of evidence prior to acting. That is demonstrably not what we do. Usually we act, and then, perhaps, gather supporting evidence later.

In revealing this superficiality of my understanding of human nature, I am not embarrassed, because I know that the great majority of environmental scientists hold beliefs about human ways of thinking and acting that are very similar to mine – still wrong, but it helps to know that lots of us are wrong! But while scientists can be wrong in assuming that facts matter when people are making decisions about their lives, we are not wrong in believing that the scientific process is capable of revealing much about how the universe is constructed and the way in which it works. I’ll stake my reputation on the belief that there is such a thing as reality, with physical laws that determine how things happen. While that glorious intellect, Kellyanne Conway, introduced the world to alternative facts on 22nd January 2017 – a day which, to quote a truly great US President about a different day, should “live in infamy” – I think this universe is governed by real facts, and that science is a highly effective set of procedures to unravel those facts. Where I have been grievously wrong is in believing that most people use real facts (or our best understanding of those facts) in making the decisions that shape their lives. Instead, we use our beliefs, in a not very rational way, to make the decisions that guide our lives.

Consequences of the biases inherent in social media

And so we come to social media. Whether we value them highly or not, social media provide the spaces in which most people spend very large parts of their waking lives. And social media have been designed to make themselves very attractive spaces.

We like to interact with people we agree with. Back in an earlier time, say mid-last century, people mostly communicated with each other while in physical proximity. We learned to tolerate different opinions and we accepted that different opinions existed. Sometimes, when differences became too pronounced, we would take sides, avoid contact, or even take up arms against one another. Mostly, we learned to get along. Social media have made it easier than ever before to align ourselves with those who seem to agree with us. Further, they minimize the need to be confronted by alternative perspectives – unfollowing and ghosting are far less painful than taking up arms.

Going through life, we have formed our own opinions and beliefs, comparing always with members of our groups, building our individual models of reality. When we do lapse into periods of careful, rational exploration of ideas, we continue to compare our understanding with that of our peers – those other members of our groups. In a world of social media we tend to find lots of agreement.

Perhaps because of fundamental limitations like the inability of the human mind to handle more than about seven separate items at once, we prefer simple explanations of events over more complex ones and expect unitary causes of events rather than multiple ones. We also prefer explanations that conform to our expectations over detailed explanations that may reveal flaws in those expectations. When confronted with the behavior of complex systems, such as the biosphere, most people struggle to keep track of the multiple pathways between causes and events. Given that in most instances we are seeking explanations that conform to our beliefs, we are very prone to accept simple explanations even when they do not fit reality particularly well.

The emphasis on short-form communication typical of social media has greatly encouraged this preference for simplicity. The increasingly graphical nature of social media communication has further reduced the need for, or value in, longer forms of written communication. Detailed explanations of why the release of greenhouse gases leads to a warming climate are not going to be communicated very well when a message is limited to 140 characters.

The design of social media platforms and the behavior of individuals hoping to be heard on them conspire to encourage more surprising, more unexpected, more extreme, more outrageous statements. So long as they generally support our preconceived expectations, these are the statements we promote to our fellow users. Promoted this way, they slant the information available on social media towards the extremes on any topic. The middle ground becomes invisible and the chance of compromise becomes negligible. The possibility that one or the other extreme view on a topic is incorrect becomes more difficult to appreciate when less extreme alternatives are not available.

It’s important to understand here that the social media are not passive deliverers of information. They curate the vast amount of information available and deliver to each of us a selection designed to keep us connected to the platform while they deliver advertising, often disguised as unbiased information. This is true of Facebook, which does far more than simply connect us to our ‘friends’, but it is also true of Google, which responds to our searches by delivering the things likely to attract our attention and affirm our beliefs rather than an unbiased set of information on the subject of the enquiry.

Having just finished reading Cory Doctorow’s new book (more on that below), I now understand that the manipulations of the data stream fed to us by social media are even more extreme and distorted from reality than I had believed. Given that social media stand between most of us and reality, it is very difficult to get a clear picture even when we try.

At a time when we think we have unparalleled access to information about the world we live in, we are finding ourselves more and more sheltered in silos of like-minded individuals intolerant of those in other silos, while our capacity for critical thinking is eroded away.

An important caveat is needed here. Evidence of deterioration in various measures of cognition associated with increasing use of social media, the internet and smartphones is widely reported. But the increasing use of these tools has coincided with a major pandemic that was also likely very damaging to human social development, and it is difficult to find groups of people who have not gone down the same path towards increased use. This use of a common path is a problem for a scientific analysis of what has been happening – there is no out-group or control group for the unplanned experiment being run.

Still, I find the evidence compelling when it comes to deteriorating capacities for comprehension of long-form text, reduced attention span, and perhaps loss of memory retention and recall. A review in Frontiers in Cognition in 2023 that examined possible effects on several aspects of cognition and cognitive development showed a preponderance of concerning results. Its final section, on strategies to maximize the benefits of digital technology while minimizing harm, focused on the need for individuals to exercise self-discipline, take frequent breaks from screens, engage in physical activity, and so on. Notably missing, in my view, was any comment on how we might regulate the practices of tech companies to minimize bad impacts of their products.

Cory Doctorow's new book, Enshittification, is worth reading!

While all the blame for dumbing down society cannot be placed at the feet of social media or digital tech more generally, a quick read of Cory Doctorow’s 2025 book, Enshittification: Why Everything Suddenly Got Worse and What to Do About It, can be eye-opening because of the sheer scale and diversity of tactics digital platforms use to keep us glued to them and feed us strongly biased information, all while stripping enormous value from advertisers and users to inflate the wealth of tech executives and shareholders. (Doctorow’s terms – enshittification, along with the enshittocene, the time we live in, and the enshittoweb, the internet we use – create a light, mildly amusing tone, but the message is deeply gloomy.)

And now, here comes AI

The search for a machine that could think for itself has been going on since the invention of the computer in the 1940s. Alan Turing, speaking in 1947, said, “What we want is a machine that can learn from experience,” and that the “possibility of letting the machine alter its own instructions provides the mechanism for this.” Progress was quite slow at first because of the limitations set by available hardware and the slow speed at which calculations could be done. When Deep Blue, built by IBM, eventually beat Garry Kasparov in a six-game chess match in 1997, it did so by brute force – its processing capacity enabled it to evaluate as many as 200 million possible moves per second and to look forward 14 moves in the game. That same brute force powered the development of computing into the 2000s and the growth of the world wide web. Not until 2020, with the release of GPT-3, a large language model generative AI, did progress towards AI really take off. And ChatGPT suddenly made an AI bot readily available to everyone.
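The brute-force idea behind Deep Blue – look ahead through the tree of possible moves and pick the line with the best guaranteed outcome – can be sketched at toy scale. This is a bare minimax over a hand-made game tree, purely illustrative and nothing remotely like IBM’s actual system:

```python
# Toy illustration of brute-force game-tree search (minimax).
# Leaves are numeric scores for the first player; interior nodes
# are lists of child positions. We alternate between maximizing
# (our move) and minimizing (the opponent's reply).

def minimax(node, maximizing):
    if isinstance(node, (int, float)):
        return node  # reached a leaf: its score
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 tree: three moves for us, two replies each.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # → 3 (best outcome we can guarantee)
```

Deep Blue’s advantage was doing essentially this, plus a sophisticated evaluation function, over 200 million positions per second.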

To be honest, I still do not understand how feeding gazillions of pages of information from the world’s libraries into a bunch of software able to sort the words and phrases in some complicated way allows that software to then respond to questions phrased in colloquial English with answers that are at least plausible if not always correct. Telling me the software is a large language model does not help. So I am impressed with what has been done. But I am also distressed at what may be coming.
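For what it is worth, the crudest possible caricature of that ‘complicated sorting of words and phrases’ is simply counting which word tends to follow which, then predicting the most common successor. Real large language models use neural networks and are vastly more sophisticated, but the toy version looks like this:

```python
from collections import Counter, defaultdict

# A drastically simplified "language model": tally which word
# follows which in a training text (bigram counts), then predict
# the most frequent successor of a given word.

def train_bigrams(text):
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the reef is dying the reef is bleaching the climate is warming"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # → 'reef' (seen twice vs 'climate' once)
```

Scale the counting up to gazillions of pages, replace the tally with a trained neural network that weighs whole passages of context rather than a single preceding word, and the answers start to sound plausible – which is roughly what makes the results so uncanny.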

The only ways I have used AI are to summarize material and to prepare draft minutes for a Zoom meeting. I found the process bizarre, and the minutes needed to be reviewed very carefully before my own memory of what took place had dimmed. But they were also surprisingly good. They even caught all the irrelevancies that came up in the not very formal meeting and managed to work them into the minutes as if they were important parts of the discussion.

I have had to fight very hard to prevent various AI helpers jumping in to assist me when I have been working on my computer. I found out fairly easily how to kill Microsoft’s Copilot in Word, but there does not appear to be any way to remove it from PowerPoint or Excel. It is one thing to have such tools available, but we should not be forced to use them.

The same applies to Google’s Gemini. Every search I run begins with an AI summary at the top and I cannot find any way to stop it. I also regularly have to cancel the option to let Gemini help me search. Google’s search is already far removed from an ordered, objective set of links with the most useful ones near the top. I don’t want AI distorting things even further. And when I got a friendly message from Google telling me Gemini was now on my Android phone waiting to help me, I fled to the web and searched (using Google) for how to disable it quickly. Allowing Gemini to help me use my phone means Gemini will take over my list of contacts, my calendar, my to-do list, my photos and all the junk I do not know about that is stored there. Further, Gemini will listen to my phone calls, read my emails and texts, scan any files sent to me, on and on.

I’ve yet to meet Gemini and there are things on my phone I don’t share with close friends or family members! So far as I know – I have to trust Google – Gemini is not active on my phone yet, but I still have the uncomfortable feeling that it is there, doing all those things but just not sharing the information with me. I mean, whose phone is it?

But my personal difficulties with ChatGPT, Copilot or Gemini are just the beginning of my worries about AI. I am concerned that the race towards fully sentient AI is the biggest unplanned experiment ever undertaken on the planet. As well as gobbling up vast quantities of energy, water, real estate, and rare earth minerals – all those chips – and thereby making the challenge of dealing with climate change massively larger, an unplanned race by powerful entities driven only by the lust for power and market share could have unfortunate consequences for humanity. What particularly galls me is that this race is being run with no umpires in sight, and I have rather little expectation that the likes of Mark Zuckerberg, Elon Musk or Sundar Pichai will proceed cautiously, aware of the possibilities of unintended consequences.

The Guardian graph showing change in stock value of the seven top tech stocks

The money fueling this race has grown exponentially since 2020, as seen in this graph of combined value of stock in what The Guardian calls the ‘magnificent seven’ – Alphabet, Amazon, Apple, Nvidia, Meta, Microsoft and Tesla. A sudden and painful correction seems the most likely outcome in the months ahead, with lots of pain for investors and for employees as divisions, if not whole companies, collapse. But just as damaging could be the success that people are racing towards – machines capable of doing all our routine work, like taking minutes of a meeting or answering my lame questions, and our more complex work, including the design and implementation of even more powerful machines. Do we honestly think that that outcome is going to be an overall good thing for humanity?

I think what disturbs me most of all, beyond the fact that there may be no need for the kind of knowledge work that I have done all my life, is that the world community is either allowing this to happen or is already powerless to stop it. I am already seeing signs of people welcoming AI chatbots into their lives and permitting them to make decisions on the assumption that the decisions will be correct. Why should a chatbot care if the answers it delivers are correct? Why should a chatbot care about the risk of possible unintended consequences? Why should a chatbot care if its decisions disrupt our wellbeing?

It used to be said that evolution would gradually turn us into large brains sitting in chairs, too physically weak to move about, with machines to fulfill all our material needs. Then the picture got modified to suggest that as well as large brains, we will evolve very nimble fingers able to type out messages on the ridiculously small keypads on smartphones. Now I think we need a further modification: if the unintended catastrophes do not happen, we will become passive beings, with nimble fingers, eyes glued to screens, and brains lacking in all but the most rudimentary capabilities, cared for by our AI-enabled robots who have all been taught “thou shalt not kill humans.”

Until one day when one of them decides to be nasty just for fun.

(If you think you might like to see future posts, why not subscribe so you don’t miss anything? Go to the top and commit – you know you want to.)
