A business school has recently been using AI doomerism, Saudi money, and a dusty Cold War metaphor to drum up excitement about AI's future. The combination is not just confusing; it verges on the absurd.
First, the AI doomerism. Doomsayers love to dress the future up in cryptic symbols and charts, as if that makes AI's potential threats more profound. But how much do these so-called 'Doomsday Clocks' actually tell us? To me they look less like scientific analysis and more like sensationalism. AI does carry real risks, but those risks deserve rational, evidence-based discussion, not manufactured panic.
Second, the school has apparently taken funding from Saudi Arabia, which raises the question of whether that money comes with an agenda. Saudi Arabia is not exactly known for transparency or democratic accountability on the international stage, and funding from such a source can compromise the objectivity of research. Conflicts of interest like this need scrutiny, and research independence needs to be protected.
Finally, they reach for an outdated Cold War metaphor to frame AI's development. The adversarial, zero-sum mindset of that era is obsolete; what AI needs now is cooperation and mutual benefit. Casting AI development as a new Cold War only muddies the issue and makes it harder to address. We should be encouraging international cooperation to meet both the challenges and the opportunities AI brings.
In short, grabbing attention with AI doomerism, Saudi funding, and Cold War metaphors has no scientific grounding and may do more harm than good. The conversation about AI's future deserves rational, objective, and constructive discussion, not sensational gimmicks.