
Geopolitical Monitoring Report | February 3, 2023


Global

 

The Emerging AI “Arms Race” Set to Have Global Implications

 

Background:

OpenAI’s ChatGPT is currently on track to reach 100 million active users worldwide within two months of its launch, hitting that milestone far faster than popular apps such as TikTok and Instagram. The application’s success has prompted a number of other companies around the world to begin their own AI projects.

For example, China’s Baidu has announced that it will soon launch its own AI chatbot to compete with ChatGPT. There has also been a proliferation of other AI tools, such as those that generate images from written prompts and mimic voices.

The latter capability has already been a significant source of controversy after 4chan users began using voice-mimicry AI software to produce audio recordings of famous individuals engaging in hateful or lewd speech. Experts say that video generation is the next capability AI companies will be looking to achieve.

Impact:

The rapid proliferation of this technology will have a significant impact everywhere from the halls of national governments to local elementary schools, and there will be significant debate over when the use of AI tools is appropriate.

For example, a Colombian judge stoked significant controversy after he used ChatGPT to help draft his ruling in a case determining whether an autistic child’s insurance company was liable to pay for his treatment. Governments, companies, educational institutions, and other organizations will all have to grapple with these questions, as this judge is unlikely to be the only person using this technology for work-related purposes.

Another challenge posed by AI is the recent advancement in image and voice generation. This type of technology could be used to extend the reach and impact of disinformation campaigns. It could also be used to incite violence against individuals or groups by faking audio recordings of them saying offensive or unseemly things. In addition, corporations could be harmed by faked audio of executives or other high-ranking leaders delivering bad news about their company on earnings calls or similar corporate meetings, which could trigger a real decline in stock prices.

There is also a direct cybersecurity risk posed by this technology. Researchers have already used AI tools to create sophisticated malware, lowering the barrier to entry for aspiring hackers.

Mitigation:

Companies and organizations should be aware of the security threats associated with the rise and proliferation of AI tools and the impact those threats could have on their operations.

First and foremost, they should explore creating policies on the use of AI technology if there is any potential that their employees will adopt it. Doing so will help them avoid the reputational and regulatory risk that could come from using this technology.

These policies should also address the sources of data being fed into AI tools, to avoid ending up like the US-based company Replika, which was just banned by the Italian government from using the personal data of Italian citizens. While this may be the first high-profile example of AI running afoul of local regulations, it will not be the last.

AI voice software used to impersonate real people will likely be the most pressing external threat posed by this technology, and it highlights the need for security teams to invest in deep web monitoring. These tools could be used to impersonate corporate executives in order to damage your company’s reputation or hurt its stock price. More broadly, threat actors could use this technology to incite riots or undermine confidence in local officials, creating a physical security threat for your organization.

Ensuring that your organization can identify these types of campaigns early is essential to mitigating their impact. The fact that this technology can help threat actors create new types of malware should also raise significant concerns among cybersecurity teams. Companies should conduct regular cyber threat landscape assessments to identify possible vulnerabilities, and perform regular external threat hunting to identify potential threat actors before they can impact your networks.
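To make the early-identification step more concrete, the short Python sketch below shows one very simple form such monitoring can take: flagging collected text that pairs a monitored executive’s name with a high-risk term so an analyst can review it. The executive names, risk terms, and sample feed are hypothetical placeholders for illustration only, not a Nisos tool or a production-ready detection method.

# Minimal illustrative sketch: flag collected text that mentions a monitored
# executive alongside a high-risk term, surfacing it for analyst review.
# All names, terms, and the sample feed below are hypothetical placeholders.

EXECUTIVES = ["jane doe", "john smith"]        # hypothetical monitored names
RISK_TERMS = ["leaked audio", "recording", "earnings call", "resigns", "scandal"]

def flag_post(text: str) -> bool:
    """Return True when a post mentions a monitored executive and a risk term."""
    lowered = text.lower()
    mentions_executive = any(name in lowered for name in EXECUTIVES)
    mentions_risk_term = any(term in lowered for term in RISK_TERMS)
    return mentions_executive and mentions_risk_term

# Hypothetical sample of posts gathered from monitored sources
collected_posts = [
    "Leaked audio: Jane Doe admits the earnings call numbers were faked",
    "Unrelated chatter about the weekend's football results",
]

for post in collected_posts:
    if flag_post(post):
        print("Flag for analyst review:", post)

In practice, keyword matching like this is only a first-pass filter; flagged items still require analyst review and broader context before any response.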

About Nisos®

Nisos is The Managed Intelligence Company®. Our services enable security, intelligence, and trust and safety teams to leverage a world-class intelligence capability tailored to their needs. We fuse robust data collection with a deep understanding of the adversarial mindset, delivering smarter defense and more effective response against advanced cyber attacks, disinformation, and abuse of digital platforms.
