The use of artificial intelligence (AI) in organisations is increasingly widespread, thanks to its ability to increase efficiency, reduce the time spent on menial tasks, and rapidly solve complex problems.
But, as with any emerging technology, AI’s growth in popularity establishes a new attack surface for malicious actors to exploit, thereby introducing new risks and vulnerabilities to an increasingly complex computing landscape.
Organisations’ increasing use of AI systems for a range of activities (making better decisions, improving customer service, reducing costs, etc.) presents potential issues. AI relies on large amounts of data, which raises privacy and security concerns: poorly secured AI systems are prime targets for data breaches, which can result in unauthorised access, theft, or misuse of sensitive information. There is also the risk that bad actors will disrupt AI applications, and with them, business operations.
Using generative AI can itself have data security implications; employees who are not fully aware of the security risks may feed company intellectual property into a public AI tool, inadvertently leaking confidential or sensitive material into the public domain.
Other risks include data manipulation designed to deceive AI algorithms. Data poisoning, achieved by injecting malicious data into a model’s training set, can manipulate the outcome of machine learning models so that the AI misclassifies data and makes bad decisions.
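To make the mechanics concrete, here is a minimal sketch of a label-flipping poisoning attack against a simple classifier. The dataset, model, and poisoning rate are illustrative assumptions rather than a reproduction of any real incident; the point is simply that corrupting a fraction of the training labels measurably degrades the model.

```python
# Minimal sketch of label-flipping data poisoning, using scikit-learn.
# The dataset, model, and 30% poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Build a toy binary classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# The attacker flips the labels of 30% of the training rows ("poisons" them).
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

Running the sketch typically shows test accuracy falling noticeably once a meaningful share of the training labels has been flipped, which is exactly the effect a poisoning attacker is after.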
AI can also be harnessed by threat actors to carry out convincing social engineering attacks at scale. By learning to spot patterns in behaviour, it can work out how to convince people that a video, phone call or email is legitimate, and generate phishing content that persuades them to compromise networks and hand over sensitive data (at one stage, security experts noted that AI-generated phishing emails had higher open rates than those manually crafted by humans).
Attacks such as denial of service, malware, and ransomware are expected to become more sophisticated with the malicious use of AI. AI-powered malware, for example, can enhance evasion techniques, automate attacks, and even impersonate individuals.
And generative AI tools such as ChatGPT can be used to learn from and generate malicious code at a much faster rate.
In addition, AI can be used to design malware that constantly changes to avoid detection by automated defensive tools, while emerging AI techniques such as deepfakes and voice cloning raise the possibility that image recognition and voice biometrics can be bypassed.
Protecting the increased attack surface created by AI requires several lines of action, including IT tooling, changes to employee habits, and data governance.
Technology such as Security Information and Event Management (SIEM) tools will enhance an organisation’s cyber security measures, while implementing initiatives such as ‘zero trust’ models, whereby network traffic and applications are continuously monitored and verified rather than trusted by default, is good general security practice.
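As an illustration of the kind of continuous monitoring a SIEM performs, the following minimal sketch flags source IPs that produce a burst of failed logins inside a short window. The event format, field names, and thresholds are assumptions made for this example; real SIEM platforms express such rules in their own query languages.

```python
# Minimal sketch of a SIEM-style detection rule: flag source IPs with a
# burst of failed logins inside a short sliding window. The event format,
# field names, and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # sliding window to examine
THRESHOLD = 5                   # failures inside the window that raise an alert

def detect_bruteforce(events):
    """events: iterable of dicts such as
    {"time": datetime, "ip": str, "outcome": "success" or "failure"}."""
    recent_failures = defaultdict(list)
    alerts = []
    for event in sorted(events, key=lambda e: e["time"]):
        if event["outcome"] != "failure":
            continue
        window = recent_failures[event["ip"]]
        window.append(event["time"])
        # Drop failures that have aged out of the sliding window.
        window[:] = [t for t in window if event["time"] - t <= WINDOW]
        if len(window) >= THRESHOLD:
            alerts.append((event["ip"], event["time"]))
    return alerts

if __name__ == "__main__":
    start = datetime(2024, 1, 1, 12, 0, 0)
    burst = [{"time": start + timedelta(seconds=i), "ip": "203.0.113.7",
              "outcome": "failure"} for i in range(6)]
    print(detect_bruteforce(burst))  # alerts on the 5th and 6th failures
```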
Due to the rapid adoption of AI, there is still a lack of national and global legislation, governance, and guidance on using it, particularly in a professional environment. And while common sense and a general understanding of IT security are good places to start, they are not sufficient to rely on. It is therefore increasingly important that organisations devote time and resources to developing internal use and misuse policies for employees using AI in the workplace, so that information and its integrity are protected.
These policies are only possible with a commitment to staying informed through ongoing research and continuous knowledge sharing; collaboration between AI experts and cyber security professionals across organisations is vital for a comprehensive and proactive approach to identifying and mitigating AI-related risks.
Policies need to be reinforced with good training. This starts with regular sessions and skill development for cyber professionals on the current risks of AI and those that may emerge in the future, backed up by role-appropriate education for employees throughout the organisation.
For all the discussion around the new security risks introduced by AI, its ability to help organisations defend themselves against cyber attacks should not be forgotten.
Staff training on cyber security hygiene can be accelerated by using AI to generate training content, for example. Meanwhile, more and more defence software incorporates AI, with Microsoft announcing Security Copilot, a product designed to help defenders take rapid action on security-related issues.
AI can also play a significant role in penetration testing by automating certain tasks and helping testers identify vulnerabilities more quickly and accurately. Machine learning algorithms can be trained on large datasets to recognise patterns and identify potential vulnerabilities that may not be immediately obvious to human testers. This means AI can detect and respond to threats in real time, and spot patterns that may indicate an impending attack.
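As a rough illustration of that pattern-spotting idea, the sketch below trains an unsupervised anomaly detector (scikit-learn’s IsolationForest) on synthetic “normal” network-flow features and flags an unusually large transfer. The features and figures are invented for this example; a real detector would be trained on an organisation’s own telemetry.

```python
# Minimal sketch of ML-assisted anomaly detection on network-flow features.
# The features and figures are synthetic assumptions invented for this
# example, not a production detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: columns = bytes sent, packet count, duration (s).
normal_flows = rng.normal(loc=[500, 20, 1.0], scale=[100, 5, 0.3],
                          size=(1000, 3))

# Train on traffic assumed to be overwhelmingly benign.
detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_flows)

# Score two new flows: one ordinary, one exfiltration-like large transfer.
new_flows = np.array([
    [480, 19, 0.9],        # resembles ordinary traffic
    [50000, 900, 30.0],    # unusually large transfer: candidate alert
])
print(detector.predict(new_flows))  # 1 = inlier, -1 = flagged anomaly
```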
There is a lot of understandable fear around the adoption of AI, and there is no doubt that it raises the risk level in many areas. Much of its capability is still unknown, but cyber security professionals need to adopt a balanced approach that sees them lay strong defence foundations by staying informed, adhering to good practice security principles, and implementing appropriate security measures.