Artificial intelligence continues to make headlines as developers consider the technology’s potential to solve the most intractable technical problems with ease. But AI isn’t a magic cure-all – it’s a very specific set of tools for solving a specific set of problems.
For most executives and office managers without a deep background in AI technology and machine learning algorithms, drawing an analogy to managed IT services is helpful. While there are obvious, fundamental differences between the two, they have a surprising number of elements in common.
How AI is Similar to Managed IT Services
Today’s artificial intelligence is not at all like the sci-fi concept readers and moviegoers have been familiar with for almost a century. It’s simply a tool for accurately finding patterns in large data sets and using those patterns to solve complex problems.
This makes AI useful in a broad range of applications, from detecting cancer to reducing false alarms in a cybersecurity operations center. AI’s ability to scale its resources to respond to dynamic environments sets it apart from even the most sophisticated non-AI software.
But this also makes AI functionally similar to a managed IT services vendor. Managed IT (MIT) teams scale their resources to respond to dynamic environments and accurately find patterns in large data sets to solve complex problems.
This becomes evident when AI technology and MIT services intersect – such as in cybersecurity. Every cybersecurity vendor on the market wants to incorporate AI into its processes, and many are overeager to implement it purely for marketing purposes. At the same time, the global cybersecurity talent shortage makes AI development a worthwhile investment for security vendors.
From the customer’s point of view, handing over cybersecurity concerns to AI-powered security operations software or to a human-operated MIT vendor looks largely the same. In both cases, the customer entrusts the entire process – and all of their data – to a third party to resolve vulnerabilities and improve security performance.
In both cases, the fundamental process of choosing which events, transactions, and alerts are suspicious falls onto the outsourced security provider. The customer simply enjoys access to a reliable, secure network that accommodates its needs.
But this is where differences begin to arise. There is a key question where the capabilities of AI and cybersecurity often fail to intersect appropriately: Accountability.
The Black Box Problem
If artificial intelligence always made the correct predictions and never made mistakes, there might not be a need to consider the Black Box problem. But in the real world, artificial intelligence does make mistakes and AI programmers sometimes don’t know why.
Artificial intelligence works by fitting arbitrarily complex algorithms to large sets of data, adjusting them until they produce desirable results. The groundbreaking innovation is that AI software teaches itself to do this using whatever inputs its programmers give it.
The problem with this approach is that there is no way to see “inside” the algorithm that artificially intelligent programs develop for themselves. This creates a transparency problem that amplifies into a serious accountability problem when the AI makes mistakes in high-stakes applications.
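The opacity is easy to see even in a toy model. The sketch below (pure Python, with invented feature values and labels purely for illustration) fits a minimal classifier to flag "suspicious" activity – and the only "explanation" it can offer for its verdicts is a handful of learned numbers:

```python
import math

# Toy training data: [failed logins/min, MB transferred] -> 1 = suspicious.
# All values are invented for illustration.
data = [
    ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.3, 0.3], 0),
    ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.7], 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit a tiny logistic model by gradient descent: the program adjusts
# its own parameters until the outputs match the labels.
w, b = [0.0, 0.0], 0.0
for _ in range(2000):
    for x, y in data:
        err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5

# The model works, but "why" is buried in opaque fitted numbers:
print("learned weights:", w, "bias:", b)
print(predict([0.85, 0.9]))  # flagged as suspicious
print(predict([0.15, 0.1]))  # not flagged
```

With two parameters a human can still squint at the weights; real AI systems have millions of them, which is where inspection breaks down entirely.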
Cybersecurity is a perfect example of a high-stakes application where entrusting the entire process to an intelligent – but ultimately unaccountable – system can go wrong. Reputable cybersecurity vendors know this and are developing workarounds to the black box problem.
One such workaround involves “supervised learning”, which relies on human guidance during AI training to generate more reliable results. But this is not a true solution because there is still no technical method for reaching into the black box to see how the AI interprets guided training.
Furthermore, supervised learning comes with its own set of problems. If the objective of AI training is to develop a solution reliable enough to avoid the black box problem, ambitious cybercriminals could simply hack into the training session, input their own malicious code, tag it as “safe” and then enjoy a ready-made exploit for every one of the firm’s customers for years to come.
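The poisoning risk described above can be sketched in a few lines. This hedged example uses a deliberately simple 1-nearest-neighbour classifier over invented feature vectors (no real vendor trains this way) to show how a single mislabeled training example creates a ready-made blind spot:

```python
def nearest_label(sample, training_set):
    """Return the label of the closest training example (1-nearest-neighbour)."""
    return min(
        training_set,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], sample)),
    )[1]

# Legitimate supervised training data: (features, human-assigned label).
# All values invented for illustration.
training = [
    ((0.10, 0.10), "safe"),
    ((0.20, 0.15), "safe"),
    ((0.90, 0.95), "malicious"),
    ((0.85, 0.90), "malicious"),
]

payload = (0.88, 0.92)  # attacker's code resembles known malware
print(nearest_label(payload, training))  # correctly judged "malicious"

# Attacker tampers with the training session: same payload, tagged "safe".
poisoned = training + [((0.88, 0.92), "safe")]
print(nearest_label(payload, poisoned))  # now judged "safe"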
Managed IT Services Offer Accountable Security
Although AI and managed IT services share many elements, the divergence over accountability is a deal-breaker for conscientious office managers. If AI-based cybersecurity firms are already depending on expert human guidance to address AI’s lack of accountability, then it falls on office managers to do the same.
The best cybersecurity solutions remain those where security personnel are accountable for their actions and can demonstrate that fact with detailed event logs. AI definitely has a role to play in reducing false positives, but it has not yet reached the level of sophistication necessary to be accountable for high-stakes decision making.
Implement the best network infrastructure solution and managed IT services for your organization with our help. Contact Image Net Consulting to get started!