We need to examine the beliefs of today’s tech luminaries


The author is a science reviewer

Very rich or very smart people, or both, sometimes believe strange things. Some of these beliefs are bundled into the acronym Tescreal. The letters stand for a set of overlapping futurist philosophies — beginning with transhumanism and ending with longtermism — favored by many of AI’s wealthiest and most famous proponents.

The hashtag, coined by an ex-Google ethicist and a philosopher, has been circulating online and neatly explains why some in tech want the public to focus on nebulous future issues such as existential risk, rather than on algorithmic bias and other current liabilities. A fraternity ultimately committed to cultivating artificial intelligence for a post-human future may care little about the social injustices committed by its erring creations today.

Alongside transhumanism, which promotes the technological and biological augmentation of humans, Tescreal embraces Extropianism, the belief that science and technology will deliver indefinite life; Singularitarianism, the view that artificial superintelligence will eventually surpass human intelligence; Cosmism, a manifesto for curing death and spreading outward into the universe; Rationalism, the conviction that reason should be humanity’s supreme guiding principle; Effective Altruism, a social movement that calculates how best to benefit others; and Longtermism, a form of utilitarianism which, in its radical version, holds that we have a moral duty to people who do not yet exist, even at the expense of those who do.

The acronym can be traced back to an unpublished paper by Timnit Gebru, the former co-head of AI ethics at Google, and Émile Torres, a Ph.D. student at Leibniz University. The first draft of the paper, which has not yet been submitted to a journal, argues that the unvetted race toward AGI (artificial general intelligence) has produced “systems that harm marginalized groups and concentrate power, while using the language of social justice and ‘good for humanity’,” akin to the eugenicists of the twentieth century. The authors add that a generic, undefined AGI cannot be properly tested for safety and therefore should not be built.

Gebru and Torres went on to explore the intellectual motivations behind the AGI movement. “At the heart of this (Tescreal) bundle,” Torres explained in an email to me, “is a techno-utopian vision of a future in which we become radically ‘augmented’, becoming immortal ‘post-humans’, colonizing the universe, redesigning entire galaxies (and) creating virtual reality worlds in which trillions of ‘digital beings’ exist”.

The interests of tech luminaries certainly overlap. Elon Musk, who wants to colonize Mars, is sympathetic to longtermist thinking and owns Neuralink, essentially a transhumanist company. PayPal co-founder Peter Thiel has backed anti-aging technology and funded a Neuralink rival. Both Musk and Thiel have invested in ChatGPT creator OpenAI. Like Thiel, Ray Kurzweil, the singularitarian now employed by Google, wants to be cryogenically frozen and resurrected in a scientifically advanced future.

Another influential figure is the longtermist philosopher Nick Bostrom. He heads the Future of Humanity Institute at Oxford University, whose funders include Musk. (Bostrom recently apologized for a racist email he sent decades ago.) The institute works closely with the Centre for Effective Altruism, an Oxford-based charity. Some effective altruists already see a career in AI safety as a no-brainer. After all, there is no better way to do good than to save our species from a robot apocalypse.

Gebru and others have described such rhetoric as fear-mongering and marketing hype. Many will try to dismiss her point — she was fired from Google after raising concerns about the energy use and societal harms associated with large language models — as sour grapes or an ideological rant. But that overlooks the motivations of those who run the AI show, a dizzying corporate spectacle whose plot lines few can confidently follow, let alone police.

Repeated talk of a possible technological apocalypse not only sets up these tech elites as the guardians of humanity; it also implies that the path we are on is inevitable. And it distracts from the real harms accumulating today, identified by scholars such as Ruha Benjamin and Safiya Noble. Decision-making algorithms trained on biased data are depriving Black patients of priority in certain medical procedures, while generative AI is exploiting human labor, spreading misinformation and putting jobs at risk.

Maybe these are plot twists we shouldn’t notice.
