Artificial Intelligence in Intelligence: the influence of AI on spy tech

Intelligence agencies are already using AI in ways big and small: scanning the news for dangerous developments, sending alerts to ships about rapidly changing conditions, and speeding up routine work significantly. But before the intelligence community can use AI to its full potential, the technology must be hardened against attack, and the humans who use it (analysts, policy-makers and leaders) must better understand how advanced AI systems reach their conclusions. Agencies are urged to use the technology to detect and block cyberattacks, analyse video and audio evidence, and automate administrative tasks. However, AI is unlikely to predict upcoming threats from potential criminals or terrorists, and it will not replace human judgement.

A recent report by the Royal United Services Institute (RUSI) think tank, commissioned by GCHQ, was compiled with access to top-secret British intelligence. While RUSI endorsed the use of AI across the UK's national security apparatus, it also warned that the technology could raise additional privacy and human rights concerns. It said enhanced policy and guidance would be required to make sure these considerations were reviewed on an ongoing basis.

Government Communications Headquarters, commonly known as GCHQ, is an intelligence and security organisation responsible for providing signals intelligence and information assurance to the government and armed forces of the United Kingdom. © Flight Collection /TopFoto

The report comes amid concerns that the United Kingdom faces national security threats from criminals using increasingly sophisticated methods. Researchers said malicious actors will "undoubtedly" attempt to use AI to attack the United Kingdom, and that it is likely most hostile states are developing, or have already developed, offensive AI-enabled tactics. Potential threats to political security include the use of deepfake technology to spread disinformation, manipulate public opinion or interfere in elections. "The modern-day 'information overload' is probably the greatest technical challenge facing the UK's national security community," the report stated. "The ongoing, exponential increase in digital data necessitates the use of more sophisticated analytical tools to effectively manage risk and proactively respond to emerging security threats."
The UK could also be vulnerable to so-called polymorphic malware, which constantly mutates to evade detection, and to the automation of social engineering attacks such as phishing that target individuals.

An even bigger problem is that humans generally do not understand the processes by which very complex algorithms, such as deep learning systems and neural networks, reach their determinations. That may be a minor concern for the commercial world, where what matters most is the final output, not how it was reached. But national security leaders, who must defend their decisions to lawmakers, say opaque reasoning is not good enough for decisions of war and peace. Most neural networks with a high rate of accuracy are not easily interpretable. There are individual research programmes at places like DARPA to make neural networks more explainable, but it remains a key challenge.

RUSI said threats to physical security were a less immediate concern, but warned that the uptake of the Internet of Things, through connected cars and household devices, will expose the country to more threats.
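The interpretability gap described above can be illustrated with a toy contrast: a simple linear threat-scoring model lets an analyst read off exactly how much each input contributed to the score, a readout a deep neural network does not provide. The feature names and weights below are invented purely for illustration; this is a minimal sketch, not any agency's actual method.

```python
# Toy interpretable model: a linear "threat score" whose output can be
# decomposed feature by feature. The features and weights are invented
# for illustration; a deep neural network offers no such direct readout
# of why it reached a conclusion.

WEIGHTS = {
    "flagged_keywords": 2.0,
    "anomalous_logins": 1.5,
    "known_bad_contacts": 3.0,
}

def threat_score(features: dict) -> float:
    """Weighted sum of feature values: higher means more suspicious."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features: dict) -> list:
    """Per-feature contributions, largest first: the model's 'reasoning'."""
    contributions = [(name, WEIGHTS[name] * value)
                     for name, value in features.items()]
    return sorted(contributions, key=lambda item: item[1], reverse=True)

case = {"flagged_keywords": 1, "anomalous_logins": 2, "known_bad_contacts": 0}
print(threat_score(case))  # 2.0*1 + 1.5*2 + 3.0*0 = 5.0
print(explain(case))       # anomalous_logins contributes most (3.0)
```

An analyst can defend the score "5.0" by pointing at the sorted contribution list; the explainability research the report alludes to aims to recover something comparable from opaque models.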
Andrew Tsonchev, director of technology at cybersecurity firm Darktrace, said AI would be key both for defending digital networks and for boosting privacy. "Both government agencies and private corporations use AI as a de facto technology specifically to minimise the risk of breaches of privacy, as a result of cyber-attacks, by detecting malicious activity within their systems," he said. "This means there are fewer human eyes on data, and instead the computer algorithms can handle the process from the detection of an event through to its resolution autonomously. This is a win for privacy and security."
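The autonomous detection-to-resolution loop Tsonchev describes can be sketched in miniature with a simple statistical baseline: hosts whose traffic deviates sharply from the norm are flagged and "quarantined" with no human looking at the underlying data. The telemetry, the z-score threshold and the quarantine action are all invented for illustration and stand in for whatever a production system actually does.

```python
import statistics

# Minimal sketch of autonomous anomaly detection on network telemetry.
# A per-host connection count far outside the baseline distribution is
# treated as malicious activity and handled without a human in the loop.
# The data and the 3-sigma threshold are invented for illustration.

baseline = [12, 15, 11, 14, 13, 12, 16, 14]  # typical per-host counts
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: int, threshold: float = 3.0) -> bool:
    """Flag counts more than `threshold` standard deviations from the mean."""
    return abs(count - mean) / stdev > threshold

def handle(host: str, count: int) -> str:
    """Detection-to-resolution loop: quarantine anomalies autonomously."""
    return f"quarantine {host}" if is_anomalous(count) else f"allow {host}"

print(handle("host-a", 13))   # near the baseline: allowed
print(handle("host-b", 400))  # extreme outlier: quarantined
```

Because the algorithm, not an analyst, inspects the raw counts, no human eyes touch the data unless an event is escalated, which is the privacy benefit being claimed.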

Thumbnail Credit: www.howitworksdaily.com