Contrary to concerns about the goals and methods of using artificial intelligence at the CIA, the agency is taking a thoughtful approach with the participation of all stakeholders, according to Lakshmi Raman, director of AI at the U.S. Central Intelligence Agency. UNN reports, citing TechCrunch and Es de Latino News.
Details
Lakshmi Raman, Director of AI at the CIA, explained that generative AI matters to the intelligence organization because of the enormous volume of data it must sort and classify, and assured that, despite possible skepticism, the CIA takes a thoughtful approach to artificial intelligence.
I would call it a thoughtful approach (to AI - ed.). I would say that the approach we are taking is that we want our users to understand as much as possible about the AI system they are using, - said Raman.
Reference
It has been mentioned many times in the media that AI could be a technology "full of dangers".
There are many reasons for skepticism and concern about the CIA's use of artificial intelligence. In February 2022, Senators Ron Wyden (Democrat of Oregon) and Martin Heinrich (Democrat of New Mexico) revealed in a public letter that the CIA, although it is generally prohibited from investigating Americans and U.S. companies, had been collecting such information in a secret data repository.
Last year, a report by the Office of the Director of National Intelligence showed that U.S. intelligence agencies, including the CIA, were buying data on Americans through data brokers like LexisNexis and Sayari Analytics with little oversight.
If the CIA ever used artificial intelligence to analyze this data, many Americans would undoubtedly oppose it. It would be a clear violation of civil liberties and, given AI's limitations, could lead to seriously unjust outcomes.
Several studies have shown that crime-prediction algorithms from companies such as Geolitica are easily skewed, raising the threat of disproportionate targeting of Black communities. Other studies show that facial recognition misidentifies Black people at a higher rate than white people.
In addition, even today's best AI hallucinates, making up facts and figures in response to queries.
However, Raman insists that the CIA not only abides by all US laws, but also “follows all ethical principles” and uses AI “in a way that mitigates bias”.
Developing responsible AI means we need the participation of all stakeholders; that means AI developers, that means our privacy and civil liberties office - emphasizes Lakshmi Raman.
Additionally
Regardless of what an AI system is designed to do, it is important for system designers to clearly identify areas where it may fail. In a recent study, researchers from North Carolina State University found that AI tools, including facial recognition and gunshot detection algorithms, were being used by police officers who were not familiar with these technologies.
In a particularly egregious example of AI being abused by law enforcement, perhaps due to ignorance, the NYPD reportedly once used distorted photos of celebrities in facial recognition searches for suspects.
Any outputs generated by AI must be clearly understood by users, which means labeling AI-generated content and providing clear explanations of how AI systems work.

Everything we do at the agency, we do in accordance with our legal requirements, and we ensure that our users, our partners and stakeholders are aware of all relevant laws, regulations and guidelines governing the use of our AI systems, and we comply with all these regulations, - Raman noted.