Disclosure: The views and opinions expressed here belong solely to the author and do not reflect the views and opinions of the crypto.news editorial team.
AI is growing faster than regulators can respond, posing risks to data privacy, identity, and reputational accountability; left unchecked, it can accelerate the spread of misinformation and slow scientific progress. The march toward superintelligent AI is portrayed by its most ardent champions as a push toward a golden age of science. However, this push carries an existential risk: society could settle onto a degraded technology plateau, where the widespread adoption of immature AI tools constrains and, over time, erodes human creativity and innovation.
To most accelerationists, this view seems contradictory. AI is meant to help us complete work faster and synthesize ever larger amounts of information. However, AI cannot replace inductive thinking or the experimental process. Today, anyone can use AI to generate a scientific hypothesis and feed it into a tool that produces a full scientific paper. The output of products like Aithor often appears authoritative at first glance and may even pass peer review. This is a serious problem: AI-generated texts are already circulating as legitimate scientific findings, often with fabricated data to support their claims. Young researchers face strong incentives to use every resource at their disposal when competing for a limited number of academic positions and funding opportunities. The current incentive system in science…