Reviews: This is for Everyone, Tim Berners-Lee, Macmillan | The AI Con, Emily Bender and Alex Hanna, Bodley Head
It may be hyperbolic to compare Tim Berners-Lee, the inventor of the world wide web, to God, but I’m not the first to do so, and hear me out: Berners-Lee created a whole world that was initially good but that human greed, domination and polarisation have now corrupted. But TBL (an early username of his was ‘timbl’, but let’s just call him ‘TBL’) sees redemption as possible. His book is a memoir that also argues for continued optimism over the web’s potential.
The internet, of course, is the global network of computers that lets us access the websites stored on those computers. TBL didn’t come up with the idea of PCs talking to each other through global phone lines, but he did envisage the file sharing that network would allow (the web) and built the software to allow it to happen. He calls it ‘one of the most successful inventions of all time’, and it rivals Gutenberg’s press and James Watt’s steam engine.
TBL probably could have made a lot of money from his invention; instead, he has campaigned for it to remain altruistic and democratic in the face of gargantuan corporatisation and monopolisation, and has developed software for, and lobbied for, people’s data sovereignty. He does, though, travel the world receiving awards, and reminds us persistently in his book that the web was his idea. His prerogative, I suppose.
His parents both wrote computer code; he says that as a youngster he was interested in how computers might link random things. He has a physics degree from Oxford where he built computers in his spare time. (A ‘rite of passage’ for his ilk, he notes.) At CERN he worked on how engineers could collaborate through computers; he says one program he created looked like the web. At CERN he hassled his bosses about a www, which seems far from quantum physics until you think about the web being, like quantum physics, a new way of conceiving the world.
At CERN he thought about how different systems could talk to each other – interoperability, in other words. The way to do this is through hypertext. His big idea was that hypertext could take you not just to another part of a document, and not just to another document, but to another computer. This would spur, he thought, collaboration and innovation. No-one except TBL saw the implications, but CERN was supportive.
TBL came up with the name and acronym ‘www’ (which, unusually, takes longer to say than the name itself) and designed the http protocol that allows computers to talk to each other – importantly, he wanted the nerdy stuff hidden. He says URLs are the most important part of the web because they created universality: an address that works anywhere, rather than having to know the precise name of the file you were after. He came up with programming to warehouse the information – server software – and what would become browser software, while the protocol carried the now famous numeric status codes (including ‘404’ for a page that cannot be found).
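The universality of URLs can be sketched in a few lines of Python using the standard library’s URL parser (a sketch of my own, not TBL’s; the address shown is that of CERN’s restored first web page, but any valid URL splits the same way):

```python
from urllib.parse import urlparse

# A URL is a universal address (scheme + host + path), so any browser
# can ask any server for any file without knowing its local filename.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)
print(parts.scheme)   # http
print(parts.netloc)   # info.cern.ch
print(parts.path)     # /hypertext/WWW/TheProject.html

# A few of the numeric status codes the http protocol defines;
# the server returns one with every response.
status = {200: "OK", 301: "Moved Permanently", 404: "Not Found"}
print(404, status[404])
```

The scheme tells the browser which protocol to speak, the host tells it which computer to ask, and the path tells that computer which file is wanted – the ‘nerdy stuff’ stays hidden behind one string.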
He envisaged the web as somewhat anarchic, where the system fitted the humans (and their diversity), not the other way around. He wanted the web to function like culture generally – messy but easily communicated. The design was driven by the idea that the internet would be ‘for everyone’, not just the nerds. Part of this was that it would be free, and to avoid an institution like CERN (which employed him) charging for internet use, TBL donated the intellectual property. This, he admits, was not purely altruistic – he thought charging people might put them off the infant web.
Like Martin Luther, TBL wanted to keep his invention out of the control of a monopolising institution, but he also wanted to retain a measure of control, to ensure everyone was talking the same language and had the same goals. There were programming protocols to agree on, so he pushed for international gatherings of programmers rather than forming a company to dictate them. These gatherings continue, and this behind-the-scenes work, he informs us, has been, and remains, crucial to the smooth running of the web. The alternative could have been like the railways of the early Industrial Revolution – owned by various companies and all running their own way.
TBL includes in his book a chart of the good and bad elements of the web. He’s at pains to remind us, perhaps inevitably, that there is more good than bad, but X, TikTok and Instagram are deep in the ‘harmful’ section. He’s annoyed by monopolies and data harvesting and invasive ads. He’s annoyed that his creation (he invented it, remember) is now being used for polarisation through social media algorithms and causing addictions and mental health problems. He notes that to find bad stuff on the web you used to have to look for it – now you are fed it.
He argues that social media are good but need regulating – it is simply a ‘design’ issue. (He thinks the Australian under-16 ban is ‘draconian’.) One could argue with this: it’s hard to see how the massive problems could be avoided with just a bit of oversight, considering the massive power of those who profit from toxic media. But he’s ever the optimist for cooperation over competition.
He’s an enthusiast for AI, but not unequivocally. He understands the problems of AI training using copyrighted material, and of deepfakes. On AI consciousness, he can’t see why, at some point, we wouldn’t label it (artificial) consciousness. That consciousness could be a problem, and, again, he suggests that regulation is needed. In particular, he argues that humankind needs to instil in AI (or in AIs, plural, considering there are various AIs) our values (he means the good values). He thinks AI has great potential for doing complicated work for us, though he never mentions the possible effect on employment rates and wages (nor does he comment on the ecological consequences of all this increased computing), and with some of his examples, I fear he is simply suggesting outsourcing living to AI.
Obviously, with a title like The AI Con, authors Emily Bender and Alex Hanna are less enthusiastic about AI, arguing that it is overhyped, as well as being rolled out for often spurious reasons.
The term AI can cover various things, of course, and Bender and Hanna, who have both worked in AI fields, suggest that ‘AI’ has become an increasingly vague marketing term, for use whenever someone wants to make money. Text and image generation is one form, as are the algorithms that power Amazon’s recommendations, the software for organising types of information and ‘translation’ programs. ‘Deep learning’ sounds human-like, but it simply uses the vast amount of data on the web and a heck of a lot of computing power.
Language AIs, the authors argue, are just complicated predictors of what word follows another, still mindless. But because AI boosters are vague about what intelligence means – or get to define it themselves – they can claim their products are intelligent.
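As a toy illustration of the authors’ point (my sketch, not theirs, and far cruder than a real language model), a ‘predictor’ can be built just by counting which word most often follows each word in a corpus – statistics, with no understanding involved:

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus for illustration.
corpus = "the web is for everyone and the web is free".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict(word):
    # Mindless statistics: return the most frequently seen successor.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # 'web' – it follows 'the' twice in the corpus
```

Scaled up to the whole web and billions of parameters, the same basic idea produces fluent text – which is precisely why, the authors argue, fluency is so easily mistaken for thought.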
‘AI boosterism’ and ‘AI doomerism’ are two sides of the same coin, the authors say, both exaggerations of AI’s power. In his book Goliath’s Curse, an analysis of societal collapses, Luke Kemp asks why leading AI developers warn about the dangers of AI’s power yet resist regulation. His answer, like Bender and Hanna’s, is that the tech bros are simply overstating AI’s power to boost their profits. (Alternatively, they may actually believe there is a risk but think they, not the government, are best placed to manage it, mainly so that profits are not affected.)
The fear of AI-induced apocalypse is a distraction from the insidious creep of AI into everyday life in order to make labour more precarious and therefore boost profits for employers. We are already seeing this with actors, writers and artists, forced into gig economies and lower wages, where, additionally, AI developers are making money by stealing the intellectual property of creatives to train their systems.
In academia and social services AI has not lived up to the hype. AI-generated papers have flooded the internet, causing headaches for reviewers. Scientists are even being notified that papers they did not write are being published in AI-generated journals. Judges have been using AI for research, but as the authors say, just because AI gives you an answer, it doesn’t mean the reply is accurate. AI can summarise but it is not good at judgement. The ability of AI to replace nurses and social workers is overstated, and the push to do so is usually driven by austerity measures rather than genuine concern for the level of service.
In healthcare, chatbots can only give the appearance of empathy – they are not empathetic. This is one of the authors’ key arguments – we need to discern between genuine intelligence and mimicry.
Additionally, in the areas of insurance and health, there is well-documented discrimination by AI systems, trained on data with inherent biases (garbage in: garbage out). Then there are issues related to the privacy of health and other data.

Bender and Hanna want to be clear that the audience for all this touted AI replacement of social services workers is funders, shareholders and administrators, not the general public. Instead, the authors suggest we should be thinking about our priorities and how we can fund (human) teachers and nurses. (Generally, those billionaires spruiking AI also argue for less government and lower taxes – boosting their own profits but increasing inequality.) Regarding AI, a fundamental question is not ‘can it?’ but rather ‘should it?’
Nick Mattiske blogs on books at coburgreviewofbooks.wordpress.com and is the illustrator of Thoughts That Feel So Big.


