NewsHow AI Is Changing Battleground For Cybercriminals, Defenders – Ola Williams

How AI Is Changing Battleground For Cybercriminals, Defenders – Ola Williams

Ask ZiVA 728x90 Ads

March 12, (THEWILL) – The Country Manager of Microsoft Nigeria, Ola Williams, has pointed out how Artificial Intelligence (AI) has completely changed the battleground for both cyber criminals and defenders over the past few years.

While nefarious actors have found increasingly inventive ways to put AI to use, new research shows that AI is also modifying the abilities of security teams, transforming them into ‘super defenders’ who are faster and more effective than ever before.

The latest edition of Microsoft’s Cyber Signals research shows that, regardless of their expertise level, security analysts are around 44 percent more accurate and 26 percent faster when using Copilot for Security.

Glo

According to an analysis made by Williams, Deepfakes alone increased by tenfold over the past year, with the Sumsub Identity Fraud Report showing that the highest number of attacks were recorded in African countries such as South Africa and Nigeria.

“We’ve seen how these attacks, when successful, can have drastic financial implications for unsuspecting businesses. Just recently, an employee at a multinational firm was scammed into paying $25 million to a cybercriminal who used deepfake technology to pose as a coworker during a video conference call,” Williams stated.

According to her, the Cyber Signals report warns that these kinds of attacks are only going to become more sophisticated as AI evolves social engineering tactics.

“This is a particular concern for businesses operating in Africa, which is still a global cybercrime hotspot. While Nigeria and South Africa estimate annual losses to cybercrime of around $500 million and R2.2 billion respectively, Kenya experienced its highest-ever number of cyberattacks last year, recording a total of 860 million attacks. What’s more, understanding of deepfakes and how they operate is limited. A KnowBe4 survey of hundreds of employees across the continent revealed that 74 percent of participants were easily manipulated by a deepfake, believing the communication was authentic,” she added.

The country manager, however, listed ways to address the issue. Implementing these practices, she said, can help make sure we’re never compromised by “bringing a knife to a gunfight”. They include the following:

Launching an AI-powered defence.
Fortunately, AI can also be used to help companies disrupt fraud attempts. In fact, Microsoft records around 2.5 billion cloud-based, AI-driven detections every day.

AI-powered defence tactics can take multiple forms, such as AI-enabled threat detection to spot changes in how resources on the network are used or behavioural analytics to detect risky sign-ins and anomalous behaviour.

The use of AI assistants, which are integrated into internal engineering and operations infrastructure, can also play a significant role in helping to prevent incidents that could impact operations.

It’s critical, however, that these tools be used in conjunction with both a Zero Trust model and continued employee education and public awareness campaigns, which are needed to help combat social engineering attacks that prey on human error.

The number of phishing attacks detected across African countries increased significantly last year, with more than half of people surveyed in countries such as South Africa, Nigeria, Kenya and Morocco, saying that they generally trust emails from people they know. With AI in the hands of threat actors, there has been an influx of perfectly written emails that improve upon the obvious language and grammatical errors, which often reveal phishing attempts, making these attacks harder to detect.

History, however, has taught us that prevention is key to combating all cyber threats, whether traditional or AI-enabled. Beyond the use of tools like Copilot to enhance security posture, Microsoft’s Cyber Signals report offers four additional recommendations for local businesses looking to better defend themselves against the backdrop of a rapidly evolving cybersecurity landscape.

Adopt a zero-trust approach.
The key is to ensure the organisation’s data remains private and controlled from end to end. Conditional access policies can provide clear, self-deploying guidance to strengthen the organisation’s security posture, and will automatically protect tenants based on risk signals, licensing, and usage. These policies are customisable and will adapt to the changing cyber threat landscape.

Enabling multi-factor authentication for all users, especially for administrator functions, can also reduce the risk of account takeover by more than 99 percent.

Drive awareness among employees.
Aside from educating employees to recognise phishing emails and social engineering attacks, IT leaders can proactively share and amplify their organisations’ policies on the use and risks of AI. This includes specifying, which designated AI tools are approved for enterprise and providing points of contact for access and information. Proactive communications can help keep employees informed and empowered while reducing their risk of bringing unmanaged AI into contact with enterprise IT assets.

Apply vendor AI controls and continually evaluate access controls.
Through clear and open practices, IT leaders should assess all areas where AI can come in contact with their organisation’s data, including through third-party partners and suppliers. What’s more, anytime an enterprise introduces AI, the security team should assess the relevant vendors’ built-in features to ascertain the AI’s access to employees and teams using the technology. This will help to foster secure and compliant AI adoption. It’s also a good idea to bring cyber risk stakeholders across an organisation together to determine whether AI employee use cases and policies are adequate, or if they must change as objectives and learnings evolve.

Protect against prompt injections.
Finally, it’s important to implement strict input validation for user-provided prompts to AI. Context-aware filtering and output encoding can help prevent prompt manipulation. Cyber risk leaders should also regularly update and fine-tune large language models (LLMs) to improve the models’ understanding of malicious inputs and edge cases. This includes monitoring and logging LLM interactions to detect and analyse potential prompt injection attempts.

As we look to secure the future, we must ensure that we balance preparing securely for AI and leveraging its benefits because AI has the power to elevate human potential and solve some of our most serious challenges. While a more secure future with AI will require fundamental advances in software engineering, it will also require us to better understand the ways in which AI is fundamentally altering the battlefield for everyone.

About the Author

Homepage | Recent Posts

Anthony Awunor, is a business correspondent who holds a Bachelor of Arts Degree in Linguistics (UNILAG). He is also an alumnus of the Nigerian College of Aviation Technology (NCAT), Zaria Kaduna State. He lives in Lagos.

Anthony Awunor, THEWILLhttps://thewillnews.com
Anthony Awunor, is a business correspondent who holds a Bachelor of Arts Degree in Linguistics (UNILAG). He is also an alumnus of the Nigerian College of Aviation Technology (NCAT), Zaria Kaduna State. He lives in Lagos.

More like this
Related

Nagelsmann Bets On Youth In Germany Squad For Euro 2024 Build-Up

March 14, (THEWILL)- Six newcomers, including Bayern Munich teenager...

Rep Member Flags Off N140m Livestock Programme For 140 Youths In Borno

Marach 14, (THEWILL)- The National assembly member representing Jere...

Go Ahead To Inspect Kogi Guber Election Materials – Supreme Court Tells Ajaka

March 14, (THEWILL) - The Supreme Court has...