Response to Andreessen on “Why AI Will Save the World”

Read Andreessen’s article first.

As a curious designer, I read the article published on Andreessen Horowitz’s blog, “Why AI Will Save the World.” The piece is a compelling read, filled with optimism and a vision of a future where AI is the panacea for all our problems.

However, as someone who has spent years studying the intersection of design, technology, philosophy, psychology, and human behaviour in general, I believe it's essential to examine these claims critically and offer a more nuanced perspective.

Here we go. Strap in, baby!

Andreessen begins by defining AI as a tool that can be used to improve various aspects of our lives, from coding to medicine to the creative arts. It’s a definition I agree with. AI is indeed a tool, and like any tool, its impact depends on how it’s used. However, the article then leaps to the conclusion that AI will save the world, a claim that is as bold as it is unproven.

Slow down there, buddy.

Andreessen's treatment of human irrationality lacks depth. Humans, unlike AI, are not purely rational beings. We are influenced by a wide range of factors, including emotions, biases, social pressures, and cognitive limitations, all of which can lead us to make decisions that are not strictly rational.

This irrationality is not necessarily a bad thing. It’s part of what makes us human. Our emotions, for example, can lead us to make decisions that enhance our well-being and happiness, even if they don’t maximize some objective measure of utility. While our biases can lead us astray, they can also serve as valuable heuristics, helping us make decisions quickly and efficiently in complex and uncertain situations.

However, our irrationality can also lead to problems. It can cause us to make poor decisions, to act against our own best interests, and to harm others. It can also make it difficult for us to understand and predict the behaviour of AI systems, which are designed to be rational and to optimize for specific goals.

AI, for all its potential benefits, cannot replicate the full range of human irrationality or emotions. It can simulate certain aspects of human behaviour, such as learning from experience and adapting to new situations, but it can’t truly experience emotions or biases. It can’t understand the full complexity of the human condition.

Okay, I’ll play along. In the next decade, maybe.

Andreessen argues that AI will augment human intelligence and improve outcomes in a wide range of fields. While it’s true that AI has the potential to enhance human capabilities, it’s also important to remember that AI is not a magic bullet. It’s a tool that can be used for good or ill, depending on the intentions of those who wield it.

Gun control, anyone?

The article paints a picture of a future where every child has an AI tutor, every person has an AI assistant, and every leader has an AI advisor. This vision is certainly appealing, but it overlooks several important considerations.

First, the assumption that AI will always act in the best interests of humans is flawed. AI systems are designed to optimize for specific goals, and humans set these goals. If the goals are misaligned with human values, the AI system can cause harm.

This is not a theoretical concern. We’ve already seen examples of AI systems causing harm because they optimize for the wrong goals, such as social media algorithms promoting divisive content to increase engagement.

Second, the article assumes that AI will be equally accessible to everyone. However, access to technology is often determined by socioeconomic factors. If AI tools are only available to the wealthy, they could exacerbate existing inequalities rather than alleviate them.

As of 2023, approximately 66.2% of the world's population, around 5.25 billion people, have access to and use the Internet. That leaves roughly 33.8%, some 2.65 billion people, without Internet access at all.

Third, the article dismisses concerns about AI as “moral panic.” While it’s true that new technologies often provoke fear and uncertainty, it’s also true that these technologies can have unintended consequences. The history of technology is littered with examples of innovations that were initially hailed as saviours but later revealed to have serious drawbacks. Consider the case of nuclear energy, which promised unlimited power but also brought the threat of atomic weapons and radioactive waste.

I can’t wait to watch Christopher Nolan’s Oppenheimer.

The article argues that AI will not “decide to murder the human race or otherwise ruin everything.” While I agree that fears of killer robots are largely unfounded, it’s important to recognize that AI systems can cause harm even if they don’t have malicious intentions. For example, an AI system that controls a power grid could cause a blackout if it makes a mistake, even if it doesn’t “want” to harm humans.

It could also very easily be a human error, I know.

Finally, the article claims that AI will make the world "warmer and nicer." This is a bold claim, and one that I believe is overly optimistic. AI systems can help us solve problems and improve our lives, but they can't replace human empathy, compassion, and understanding. AI can simulate these qualities, but it can't truly experience them.

Now, the straw that broke the camel's back for me. Consider the source of these claims. The article was published by Andreessen Horowitz, a venture capital firm that has invested heavily in AI companies. As The Verge reported, Andreessen Horowitz has a history of hyping its investments and selling them after their valuations have ballooned.

This strategy can be profitable for the firm, even if the companies themselves don't survive long term. It doesn't take a rocket scientist to see that the firm's optimistic view of AI may be coloured by its financial interests.

It’s in the firm’s interest to promote a positive view of AI, as this could increase the valuations of its AI investments and attract more investors to its funds. This doesn’t necessarily mean that the firm’s claims about AI are false, but it suggests they should be taken with a grain of salt.

Add pepper, too.

While I share Andreessen’s enthusiasm for the potential of AI, I believe it’s essential to approach this technology with a healthy dose of skepticism. AI is a powerful tool, but it’s not a panacea. It’s not going to save the world, but it’s not going to destroy it, either. Like any tool, AI’s impact will depend on how we use it.

We need to ensure that we use AI in ways that align with our values, that we make it accessible to everyone, and that we remain vigilant about its potential risks. We should also remember that while AI can augment our capabilities, it can’t replace our humanity.

AI is not a saviour or a destroyer. It’s a tool, and like any tool, it’s up to us to use it wisely.

By Paul Syng

Paul Syng is a multi-disciplinary designer based in Toronto. He focuses on a problem-seeking, systems thinking approach that can take any form or function.