December 14, 2023

AI in Cybersecurity: Key insights from Glasswall's Security Data Scientist, Dr. Aqib Rashid

Q - Could you tell us a bit about your background and what got you interested in machine learning? 

A - My technology career began with a strong foundation in consultancy, software engineering and web development, which I built on through several other industry positions. These experiences were both challenging and rewarding, giving me a practical understanding of the industry. During this time, I’d become increasingly aware of the critical role of cybersecurity, especially in the defense and government sectors. Concurrently, I observed the rise of machine learning (ML), a field that seemed extremely promising, with potentially widespread practical applications.

The intersection of these two fields piqued my interest. While pursuing a master’s in computer science, I wanted to explore a practical application of machine learning to cybersecurity. Therefore, I developed the world’s first web vulnerability scanner that could interact with websites in a way similar to a human penetration tester. With various AI and ML techniques, it could identify types of pages, understand context, create accounts, attempt logins, read emails, and of course produce a comprehensive analysis report profiling a website’s security. This project was not only an academic pursuit but also a potential stepping stone towards commercialization. 

However, during this period, the challenges of adversarial machine learning came to my attention. Research in adversarial ML showed that attackers could manipulate ML systems into yielding erroneous predictions. A notable example is the potential for ML systems in self-driving cars to be forced by attackers into misinterpreting their surroundings, posing significant safety risks. This insight became a turning point for me, propelling me to pursue a PhD focused on enhancing the security of AI and ML systems.
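To give a sense of the mechanics, here is a minimal, self-contained sketch of a gradient-sign (FGSM-style) evasion against a toy linear classifier. Everything in it, from the feature dimension to the perturbation budget, is a hypothetical stand-in, not any real detection system:

```python
# Illustrative FGSM-style evasion against a toy logistic-regression
# "detector". Everything here is synthetic and purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: weight vector and bias of a logistic regression.
w = rng.normal(size=16)
b = 0.1

def malicious_score(x):
    """Model's probability that the input x is malicious (class 1)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A sample the model confidently flags as malicious.
x = 0.5 * w + rng.normal(scale=0.1, size=16)

# FGSM: perturb the input against the sign of the loss gradient. For a
# linear model that gradient is proportional to w, so each feature is
# nudged in whichever direction lowers the malicious score. Here the
# budget epsilon is sized to drive the logit down to exactly -2.
epsilon = (x @ w + b + 2.0) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)

print(f"original score:  {malicious_score(x):.3f}")      # close to 1.0
print(f"perturbed score: {malicious_score(x_adv):.3f}")  # ~0.12, now "benign"
```

In real malware detection the attacker must additionally keep the file functional while perturbing it, which is part of what makes the problem so much harder than this toy suggests.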

In my PhD, I explored how adversarial ML attacks impact ML-based malware detection systems and how we might defend against these attacks. This was an extremely challenging problem, situated at the critical intersection of two of the most pressing fields in modern technology. My research in this area led to several publications in top-tier cybersecurity and ML journals. Reflecting on this journey, it has profoundly strengthened my resolve to contribute to a safer and more secure digital world. 

Q - What are your key interests nowadays?

A - In this ever-evolving landscape, my current interest revolves around the further integration of AI and ML in the critical and dynamic field of cybersecurity. I'm particularly intrigued by the capabilities of large language models (LLMs) and their applicability in this domain. These models represent a new, promising frontier of AI, and I'm keen on uncovering how they can enhance cybersecurity practices.

My interest doesn't stop at merely applying machine learning to bolster cybersecurity defenses. I'm equally fascinated by the flip side of the coin: the security of machine learning systems themselves. It's a complex, twofold challenge. On one hand, we aim to fortify cybersecurity through AI and ML; on the other, we must protect these highly sophisticated systems from being compromised. 

Continuing in this direction, I'm dedicated to researching and developing methods to prevent attacks in the sphere of AI and ML as a whole. This involves not just technical proficiency but also a deep understanding of both cybersecurity and AI/ML paradigms. 

Q - It's interesting that you were an early pioneer in using AI to identify vulnerabilities in software or configurations. These types of issues tend to stem from mistakes made by the engineer. Talk to me about how useful AI can be in countering malware, which is intentionally dangerous.

A - In my experience, AI proves exceptionally valuable in countering intentionally dangerous malware. Its detection and analysis capabilities are remarkable: it can discern patterns and signatures that traditional methods might overlook, presenting a more effective way to identify anomalies within vast datasets. What sets AI apart is its adaptability; machine learning systems can evolve and adapt to recognize previously unknown or unseen threats, providing a crucial advantage in identifying zero-day vulnerabilities.
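As a concrete illustration of the pattern-recognition side of this, here is a minimal sketch of training a malware classifier. The three features and all of the data are hypothetical stand-ins for the static properties (entropy, embedded objects, suspicious strings) a real pipeline might extract; it is not any production detector:

```python
# Minimal sketch: a classifier over hypothetical static file features.
# The three feature columns and all data are synthetic, illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Pretend columns: [byte entropy, embedded object count, suspicious-API hits]
benign = rng.normal(loc=[4.0, 1.0, 0.5], scale=1.0, size=(n, 3))
malware = rng.normal(loc=[6.5, 3.0, 4.0], scale=1.0, size=(n, 3))

X = np.vstack([benign, malware])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["benign", "malware"]))
```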

Additionally, integrating AI systems with threat intelligence databases enhances defense capabilities, offering a comprehensive defense-in-depth strategy. One of the most intriguing aspects is the potential for self-remediation and self-healing—AI has the capability to not only detect malware issues but also autonomously address and remediate them, showcasing a proactive approach to cybersecurity.

Q - What are the challenges that relate to accurately detecting malware using AI?

A - Accurately detecting malware using AI poses several challenges that I've encountered in my work. First and foremost is the issue of data and labeling: collecting high-quality labeled data for training AI models is a significant hurdle, and creating precise labels for malware samples is a time-consuming task that often requires human expertise. Another challenge arises from the distribution and imbalance of data; the unknown prevalence of malware in different environments demands a nuanced understanding of how the model performs under varying conditions, and training must be adjusted accordingly.
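The imbalance point is worth making concrete. In a minimal sketch like the following (with an assumed 19:1 benign-to-malware ratio, purely for illustration), re-weighting the rare class during training is one standard mitigation:

```python
# Sketch: handling class imbalance when labeled malware is scarce.
# The 19:1 ratio and all data here are assumed for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(1)

# 1,900 benign samples vs. 100 malicious ones over 4 toy features.
X = np.vstack([rng.normal(0.0, 1.0, (1900, 4)),
               rng.normal(2.0, 1.0, (100, 4))])
y = np.array([0] * 1900 + [1] * 100)

# "balanced" weights each class inversely to its frequency, so each
# scarce malware sample counts ~19x as much in the training loss.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip((0, 1), np.round(weights, 2))))  # {0: 0.53, 1: 10.0}

clf = LogisticRegression(class_weight="balanced").fit(X, y)
```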

The ever-evolving nature of malware introduces the challenge of concept drift, requiring regular updates to AI systems to stay abreast of the latest threats. Adversarial attacks add another layer of complexity, as attackers intentionally seek to cause misclassifications in detection systems, undermining their effectiveness. Moreover, the complexity of many AI models used in malware detection leads to a lack of explainability, making it difficult to elucidate how decisions are reached. This lack of transparency becomes problematic when organizations need to justify actions or debug false positives and negatives. Striking the right balance in performance is a final challenge, as avoiding false positives is crucial; an AI system that misidentifies non-malicious entities as malware is ultimately not useful.
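On that last point, one common way to keep false positives in check is to tune the decision threshold explicitly rather than accept the default 0.5 cut-off. A minimal sketch, on entirely synthetic scores, of capping the false-positive rate at an assumed 1% target:

```python
# Sketch: choosing a decision threshold that caps the false-positive
# rate at 1%. Scores are synthetic; the 1% target is an assumption.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)

# Hypothetical model scores: benign files (0) skew low, malware (1) high.
y_true = np.array([0] * 5000 + [1] * 5000)
scores = np.concatenate([rng.beta(2, 5, 5000), rng.beta(5, 2, 5000)])

fpr, tpr, thresholds = roc_curve(y_true, scores)

# Most permissive threshold whose false-positive rate stays <= 1%.
idx = np.where(fpr <= 0.01)[0][-1]
print(f"threshold={thresholds[idx]:.3f}  "
      f"FPR={fpr[idx]:.2%}  detection rate={tpr[idx]:.2%}")
```

The trade-off becomes explicit: whatever detection rate survives that constraint is what the system can honestly promise.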

Q - Content Disarm and Reconstruction is what Glasswall is known for. How would combining CDR and AI help to improve security and usability?

A - Combining Content Disarm and Reconstruction (CDR) with AI holds great promise for elevating both security and usability. Embracing a defense-in-depth approach to cybersecurity is crucial, steering away from reliance on a single solution. CDR operates on the principle of zero-trust file protection, removing all potential threats from a file. By integrating AI, we gain the ability to delve deeper into the detection process and achieve explainability. This means understanding precisely which elements of a file were identified as malicious, such as the detection of a malicious macro, and gaining insights into why these elements were removed.
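To make the "remove, rebuild, explain" flow tangible, here is a deliberately simplified sketch. It treats a macro-enabled Word document as the OOXML zip it is and rebuilds it without the VBA parts, returning a small report of what was stripped. Real CDR, Glasswall's included, goes far deeper (relationships, content types, and every other element of the file), so this illustrates the concept only:

```python
# Deliberately simplified illustration of the CDR concept: rebuild a
# macro-enabled Word document (an OOXML zip) without its VBA project,
# and report what was removed. Real CDR also repairs the package's
# relationship and content-type entries, which this sketch omits.
import zipfile

MACRO_PARTS = {"word/vbaProject.bin", "word/vbaData.xml"}

def sanitize_docm(src_path: str, dst_path: str) -> list[str]:
    """Rebuild the document without macro parts; return what was removed."""
    removed = []
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename in MACRO_PARTS:
                removed.append(item.filename)  # flagged element, not copied
                continue
            dst.writestr(item, src.read(item.filename))
    return removed

# Hypothetical usage:
# report = sanitize_docm("invoice.docm", "invoice_safe.docm")
# print("removed:", report)  # e.g. ['word/vbaProject.bin']
```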

Moreover, the collaboration of CDR and AI offers the potential to enhance content integrity by reducing errors in the reconstruction process, ensuring that the sanitized file maintains its original integrity. The synergy of these technologies also leads to improved reporting capabilities. AI can provide detailed reports and insights regarding the threats it identifies, offering a comprehensive view of the security landscape. This information is invaluable for organizations seeking a clear understanding of the safety of their systems and making informed decisions about their cybersecurity strategies. In essence, the amalgamation of CDR and AI not only fortifies defenses but also empowers users with a deeper understanding of potential threats and the actions taken to mitigate them.

Q - Recently the UK Government hosted the AI Safety Summit 2023, which sought to unite international governments, leading AI companies, civil society groups, and research experts to collectively examine the risks associated with AI, particularly at the frontier of development, and engage in discussions on strategies for mitigation through internationally coordinated action. What are your key takeaways? 

A - The summit focused primarily on the regulation and safety of AI, with the emphasis on control rather than a practical exploration of how to make AI safe. While I acknowledge the need for greater checks and balances around AI safety, I believe these decisions must be made carefully. A collaborative approach involving experts from both industry and academia is necessary to address these challenges effectively. Collaboration, however, must also be approached with caution: some industry figures, despite being regarded as experts, may have motivations, such as being first to market, that could bias their perspectives.

Overall, several key concerns stand out for me: the reliability and robustness of AI, the security measures needed to prevent its misuse, and the protection of the sensitive data used to build it. These issues are particularly pertinent today, when privacy breaches and data leaks are a real risk. Addressing these challenges is crucial for the responsible, wider deployment of AI in the future.

Q - What are your personal visions for the future of AI?

A - When I think about the future of AI, I see it as an extension of our human capabilities, not just a standalone technology. It's about AI simplifying everyday tasks, offering expert advice, and working in harmony with us. Take ChatGPT as an example – it's just the beginning, showcasing how AI can handle a variety of tasks efficiently and make certain parts of life much easier.

Moving forward, I envision the rise of more advanced AI assistants, leading us towards Artificial General Intelligence (AGI). This is where we approach the 'technological singularity' – a point where AI's capabilities could surpass human intelligence, unlocking a new era of possibilities. It's a future where AI not only assists but also inspires and innovates alongside us. 

