The path to self-regulation without political interference: 50 years ago, on February 24, 1975, a conference was held in California to develop rules for research in the "uncharted territories" of science. That code of rules proved important for the early development of synthetic biology and nanotechnology.
Today, scientific self-regulation is relevant once again, as artificial intelligence brings both breakthroughs and threats.
UNN reports this with reference to ANSA.
Details
The possibility of genetically modifying bacteria, of developing medicines such as insulin, and, at the same time, the risk of creating potentially dangerous microorganisms: great opportunities and great unknowns prompted the pioneers of genetic engineering to meet 50 years ago, on February 24, 1975, in Asilomar, California, to establish a code of rules to guide research into the "uncharted" and dangerous "territories" of science.
The resulting set of self-imposed rules helped open new frontiers in synthetic biology and nanotechnology, and it remains relevant today as the debate over artificial intelligence unfolds.
What biologists did at that time is a good model to study
Just as today we think about the potential development of AI in many sectors of society, back in the early 1970s, DNA manipulation technologies opened up unexpected possibilities, according to ANSA.
The ability to transfer a fragment of DNA from one organism to another in order to study its functions "allowed anyone to do anything."
The same concern was expressed in 1973 in a letter signed by several researchers and published in the journal Science.
In 1974, Paul Berg and other pioneers of DNA research urged their colleagues to pause for thought and declare a moratorium on their experiments "until the potential dangers of these recombinant DNA molecules are better assessed or adequate methods are developed to prevent their spread."
This was followed in 1975 by a meeting in Asilomar of 150 researchers, representatives of government and corporate institutions, and 16 journalists.
The lively discussions there led to the development of safety rules that remain a model for the whole world.
Since then, Asilomar has become a symbol of the scientific community's ability to self-regulate without political interference and a model for many other frontier fields such as synthetic biology, nanotechnology, and artificial intelligence.
For example, on January 5, 2017, a conference on beneficial artificial intelligence, organized by the Future of Life Institute, was held in Asilomar. It likewise brought together researchers from disciplines such as economics, ethics, and philosophy, although, unlike in 1975, many of the participants were representatives of private companies.
The Asilomar conference also influenced an open letter dated March 22, 2023, in which more than 30,000 people, including Elon Musk, expressed concerns about artificial intelligence systems "that could pose serious risks to society and humanity."
Recall
The Grok chatbot from xAI stopped responding to queries about disinformation spread by Musk and Trump. The change was made by a former OpenAI employee now at xAI, without the approval of the company's management.
A study by Microsoft and Carnegie Mellon University showed that excessive trust in AI impairs people's cognitive abilities. A survey of 319 knowledge workers revealed a direct correlation between confidence in AI and a decline in critical thinking.
