
A balance between progress and security: 50 years ago, scientists created a model of self-regulation of science without politicians


The path to self-regulation without political interference: 50 years ago, on February 24, 1975, a conference was held in California to develop rules for research in the "uncharted territories" of science. That set of rules proved important during the early development of synthetic biology and nanotechnology.

Today, scientific self-regulation is relevant again amid both the opportunities and the threats posed by artificial intelligence.

UNN reports with reference to ANSA.

Details

The possibility of genetically modifying bacteria, the prospect of producing medicines such as insulin, and, at the same time, the risk of creating potentially dangerous microorganisms: these great opportunities and unknowns prompted the pioneers of genetic engineering to meet 50 years ago, on February 24, 1975, in Asilomar, California, to establish a code of rules to guide research into the "uncharted" and dangerous "territories" of science.


The resulting set of self-imposed rules helped open new frontiers in synthetic biology and nanotechnology, and it is relevant again today in the context of artificial intelligence.

What biologists did at that time is a good model to study

- one of the pioneers of artificial intelligence research, Nobel Prize winner Geoffrey Hinton, said recently.

Just as today we think about the potential development of AI in many sectors of society, back in the early 1970s, DNA manipulation technologies opened up unexpected possibilities, according to ANSA.

The ability to transfer a fragment of DNA from one organism to another to study its functions "allowed anyone to do anything,"

- noted the pioneer of this technology, Nobel Prize winner Paul Berg.

The same concern was expressed in 1973 in a letter signed by several researchers and published in the journal Science.


In 1974, Berg and other pioneers of DNA research urged their colleagues to pause for thought and declare a moratorium on their experiments.

Until the potential dangers of these recombinant DNA molecules are better assessed or adequate methods are developed to prevent their spread

- experts warned. 

This was followed by a meeting of 150 researchers, representatives of government and corporate institutions, and 16 journalists in Asilomar in 1975.

The lively discussions there led to the development of safety rules that are still a model for the whole world.

Since then, Asilomar has become a symbol of the scientific community's ability to self-regulate without political interference and a model for many other frontier fields such as synthetic biology, nanotechnology, and artificial intelligence.

For example, on January 5, 2017, the Beneficial AI conference, organized by the Future of Life Institute, was held in Asilomar. As in 1975, researchers from disciplines such as economics, ethics, and philosophy took part, although this time many of the participants represented private companies.

The Asilomar conference also influenced an open letter of March 22, 2023, in which more than 30,000 signatories, including Elon Musk, expressed concerns about artificial intelligence systems "that could pose serious risks to society and humanity."

Recall

xAI's Grok chatbot stopped naming Musk and Trump as spreaders of disinformation. The change was made by a former employee without the approval of the company's management.

A study by Microsoft and Carnegie Mellon University showed that overreliance on AI impairs people's cognitive abilities: a survey of 319 knowledge workers revealed a direct correlation between confidence in AI and a decline in critical thinking.
