The Far-Right Eugenics Mindset Behind Musk’s Twitter Takeover
Dangerous fringe ideas from the 20th Century are being resurrected by today's tech billionaires, reports Nafeez Ahmed
Elon Musk’s Twitter gambit has grabbed the world’s attention. Yet few are aware that it is only the latest episode in a wider drive by technology giants influenced by a fringe philosophy partly inspired by eugenics.
Since Musk’s takeover of Twitter – and the collapse of Sam Bankman-Fried’s FTX Cryptocurrency Exchange – there has been renewed attention on the influence of an obscure philosophy called ‘long-termism’ on their motivations.
Long-termism is not only influenced by far-right eugenics; it is, in turn, influencing the billionaires behind the biggest technology companies in the world, including Facebook, Google, Amazon and Microsoft – and the Biden administration.
It emerged out of a broader moral philosophy called ‘effective altruism’ (EA), which aims to use reason and science to do the most good for the most people. For long-termists, EA implies giving priority to improving the long-term future, because there will be extraordinarily large numbers of future people, those people have moral worth, and the quality of their lives will be determined by choices we make today.
According to former long-termist Emile Torres of Leibniz University, many of long-termism’s most prominent advocates – such as Oxford University professors William MacAskill and Nick Bostrom – believe that all human efforts should focus on eliminating existential risks, while maximising the chance of proliferating the number of happy people across the universe. In the future, they believe, humans will have largely merged with machines. Stupendously massive numbers of these digital people will inhabit virtual-reality simulations powered by planet-sized computers, where they experience pleasurable activities.
Anything which jeopardises the immense ‘moral value’ represented by these trillions of unborn future happy people coming into being is seen as a moral catastrophe, and therefore an ‘existential risk’. Some long-termists believe that almost anything is justified to ensure that this future takes place.
MacAskill has argued, for instance, that the environmental destruction wrought on wild animals by human population growth may be a moral good, because it reduces the suffering of wild animals whose lives are “worse than nothing”.
Bostrom, founding director of Oxford’s Future of Humanity Institute (FHI), which has received funding from Elon Musk (who has also endorsed his book Superintelligence), has called for total global surveillance by artificial intelligence to prevent human extinction. He recommends that every single person on Earth wear a ‘freedom tag’ with multi-dimensional cameras monitoring their every move. Bostrom has also advocated pre-emptive war to avoid existential catastrophes, while his ideas imply that sacrificing a billion lives today might be worth it if it improves the chances that 10⁵⁴ people exist in the future.
Another long-termist, Bostrom’s FHI colleague Nick Beckstead, claims that saving the lives of more “economically productive” rich people is a greater priority than saving unproductive poor people, because this would accelerate “ripple effects” needed to increase chances of the desired future of massive space colonisation.
Underlying some of these views are highly questionable beliefs about human nature rooted in eugenics – the ideology of selective human breeding to improve the human species, which ultimately culminated in Nazism.
In 2013, Bostrom co-authored a paper with FHI research associate Carl Shulman essentially recommending eugenics to increase intelligence across society. Titled ‘Embryo Selection for Cognitive Enhancement’, the paper published in Global Policy – a political science journal not known for any expertise in natural or biological sciences – claimed grandly that “the extent of adoption of human genetic selection may significantly influence national competitiveness and global economic and scientific productivity in the second half of the century”. The paper recommended “embryo selection in the context of in vitro fertilisation (IVF)” to “open up new avenues to enhance human intellectual abilities genetically.”
Nick Bostrom and other long-termists disassociate these ideas from the ‘old’ eugenics of the late 19th and early 20th centuries, arguing that individuals should decide how to use biotechnology. But, according to University of California Berkeley legal scholar Nikila Lakshmanan, this disavowal is deceptive because ultimately they still believe we are morally obliged to use embryo selection to prevent the births of many people with conditions they consider to be disabilities, including cognitive inferiority.
This will, Bostrom and Shulman argue in their 2013 paper, “increase world human capital, and, in the case of IES [iterated embryo selection], possibly create individuals with unprecedented levels of cognitive capacity.” They therefore recommend that policy-makers explore “appropriate regulatory frameworks”, and even “a common set of global rules”.
Among the sources cited in the 2013 paper is theoretical physicist Stephen Hsu, who resigned from his senior research post at Michigan State University (MSU) after he was accused of being “a vocal scientific racist and eugenicist”. The paper also acknowledges Hsu’s input in the form of comments on the manuscript.
Many of Hsu’s supporters during the MSU debacle were themselves eugenicists whose research was funded by the US-based Pioneer Fund – a Nazi eugenics foundation established in the 1930s. Hsu also has a longstanding friendship and connection with the far-right figure, antisemite and Holocaust denier Ronald Unz, whom he has platformed on his podcast.
In slide presentations about his work from 2012, the year before he advised Bostrom, Hsu approvingly quoted late British eugenicist Ronald Fisher, closing his slides with the following quotation: “…but such a race will inevitably arise in whatever country first sees the inheritance of mental characters elucidated.” Fisher is well-known for having supported voluntary sterilisation for the “feeble-minded”.
Hsu’s longstanding proposals for eugenic breeding using embryo selection to improve the overall IQ of the population are exactly what Bostrom and Shulman ended up promoting in their Global Policy paper. While there is no reason to suspect that Bostrom and Shulman agree with Hsu’s allegedly racist views and far-right affiliations, it is clear that Hsu’s ideas have influenced their long-termist vision.
“We are of course aware that parts of the field of behavioural genetics have a very dark past, but this is all the more reason to bring things to light so that society can deliberate and debate – and indeed that it is worth starting to do this before the technology is mature and ready for application (if that ever happens),” Nick Bostrom told Byline Supplement.
“We hope that many different perspectives will ultimately feed into that conversation, and that people will not pre-judge those who raise the issues, as we have, as being guilty by association.”
Many of the core principles and goals of long-termism have been taken up by a tight-knit network of the world’s most powerful technology billionaires.
Bostrom’s FHI colleague Nick Beckstead – who also commented on the 2013 eugenics paper – was CEO of the FTX Foundation, funded by FTX, until resigning after the exchange’s collapse. He is also a programme officer at Open Philanthropy, a grant-making research foundation that operates on the basis of effective altruism and was set up by billionaire Dustin Moskovitz, who co-founded Facebook with Mark Zuckerberg. Carl Shulman, a frequent Bostrom co-author, administers a fund held by the Centre for Effective Altruism, which is also funded by Open Philanthropy.
Open Philanthropy uses EA research methods to advise on grant-giving by Moskovitz’s Good Ventures, and pours millions of dollars of funding into long-termist projects.
Avril Haines, Director of National Intelligence in the Biden administration, previously consulted for Open Philanthropy. Jason Matheny, Joe Biden’s deputy assistant for technology and national security, is a former FHI research assistant who worked under Bostrom.
In the UK, effective altruism has influenced the Conservative Party through Dominic Cummings and Michael Gove. Cummings’s ideas about genes, IQ and eugenics have also been heavily influenced by Stephen Hsu, who, as noted above, has shaped Nick Bostrom’s thinking about embryo selection. Effective altruism also appears to be influencing the Labour Party through a new pressure group called ‘Labour for the Long Term’.
In 2011, Moskovitz was among the early funders of the AI and robotics firm Vicarious, which is trying to create machine-learning software that can think and learn like a human, based on the computational principles of the human brain.
Moskovitz’s Good Ventures led Vicarious’ Series A funding round. Other funders included Elon Musk, Palantir co-founder Joe Lonsdale, Facebook CEO Mark Zuckerberg and Amazon executive chairman Jeff Bezos. In 2022, Vicarious was acquired by the owner of Google, Alphabet Inc.
In 2015, Google HQ hosted an EA Global conference at its Mountain View campus. Both Elon Musk and Nick Bostrom addressed the conference. Musk is close friends with Google’s co-founder Larry Page.
Lonsdale’s fellow Palantir co-founder, Peter Thiel, spoke at EA Summit conferences in 2013 and 2014. He has also funded the Machine Intelligence Research Institute, which received support from Open Philanthropy under Nick Beckstead’s oversight.
And Vicarious investor Jeff Bezos has spoken about the inevitability of people colonising space, arguing that our solar system will support “a trillion people”, and that Earth will be preserved like “Yellowstone National Park”.
While there are disagreements among these billionaires, technology entrepreneurs and academics, they appear to share long-termism’s core principles and goals – and appear to be actively attempting to execute its vision on the world stage, with considerable success.
US investigative journalist Dave Troy has argued in Byline Times that Musk’s purchase of Twitter must be seen in this context. Control of our global information systems is a first step toward accelerating the drive toward space colonisation and the merging of people with machines. Anything that stands in the way of this vision can be seen as an existential catastrophe afflicting trillions of future unborn post-humans.
That potentially includes democratic governments and progressives that waste resources on unproductive things like environmentalism, animal welfare and poor people – when we really need to focus on breeding the coming high-IQ master-race who will become the ancestors of trillions of future people, merging with the Metaverse, and flying to Mars.