Thinking securely – Using the STRIDE framework. Article by Joshua Qwek MAICD


 

Being a pioneer and early adopter of experimental and emerging technology is often accompanied by significant risks. The potential rewards are enormous, but a lack of proper due diligence in understanding the inherent cyber risks can have catastrophic impacts on the business’s viability.

Cybersecurity risk is becoming an increasingly common topic of conversation from the boardroom to the lunchroom, thanks to several major, public cyber breaches in the last few years. Given the dynamic nature of the threat landscape, where unwavering adversaries continuously innovate and push their technical prowess to outsmart controls, the importance of a structured threat modelling approach cannot be overstated.

 

Moving forward with STRIDE.

 

STRIDE provides a structured way of identifying and addressing security threats. Developed by Microsoft in the late 1990s, the threat modelling framework helps identify and classify potential threats to systems and software applications.

The acronym STRIDE stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Each represents a potential threat that can impact the business operating environment due to cyber compromise.

Consider the area of telehealth, where Artificial Intelligence (AI) and Machine Learning (ML) have led to several significant improvements. The improvements include increased diagnostic accuracy of patient data and medical images, better prediction of health risks and suggestion of treatment plans, and enhanced interactions through AI-powered virtual assistants for routine enquiries.

One potential application of AI is using Generative AI (GenAI) to provide initial patient consultations instead of traditional clinic-based care. A fundamental implementation comprises the following components (a brief code sketch follows the list):

 

  • A digital avatar powered by Natural Language Processing (NLP) Models capable of sentiment analysis and generating human-like responses,
  • Machine learning and deep learning infrastructure for the training and fine-tuning of models and
  • Interfaces and connectivity with electronic health records and knowledge systems.
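To keep the later discussion concrete, the outline below names these components in code. Every class and method is a hypothetical placeholder for discussion, not a description of any real product.

```python
# Hypothetical component outline for the GenAI telehealth example.
# All class and method names are illustrative placeholders.
class DigitalAvatar:
    """NLP front end: sentiment analysis and human-like responses."""
    def respond(self, patient_utterance: str) -> str: ...

class ModelPlatform:
    """Training and fine-tuning infrastructure for the underlying models."""
    def fine_tune(self, dataset_path: str) -> None: ...

class EhrConnector:
    """Interface to electronic health records and knowledge systems."""
    def fetch_record(self, patient_id: str) -> dict: ...
```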

 

Assessment of Generative AI for Telehealth.

 

Working through each element of STRIDE against this architecture, the following potential threats can be deduced.
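A simple, repeatable checklist helps ensure that no component/threat pairing is silently skipped during the exercise. The sketch below is purely illustrative, pairing the three components described earlier with every STRIDE category; it is a starting point for a threat-modelling workshop, not an automated tool.

```python
# Illustrative STRIDE checklist for the GenAI telehealth example.
# Component names are assumptions for discussion, not a standard taxonomy.
from itertools import product

STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information Disclosure",
    "Denial of Service",
    "Elevation of Privilege",
]

COMPONENTS = [
    "Digital avatar (NLP front end)",
    "ML training and fine-tuning infrastructure",
    "EHR and knowledge-system interfaces",
]

# Enumerate every component/threat pair so none is overlooked in a workshop.
for component, threat in product(COMPONENTS, STRIDE):
    print(f"[ ] {threat:<22} -> {component}")
```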

Spoofing attacks involve an attacker masquerading as another entity, such as a user, device, or system component, to gain unauthorised access. Traditional spoofing attacks target the patient, health care provider, or the platform. A successful attack can result in incorrect diagnoses, unauthorised access to patient records, or fraudulent billing.

A trusted telehealth system must also address potential spoofing attacks on the digital avatar itself. Deepfake technologies are rapidly maturing, and near-real-time motion capture, face and audio cloning, and rogue avatars will become more prevalent.
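As one illustration of an anti-spoofing control, the sketch below requires every request to carry a message authentication code bound to the claimed caller, so an attacker cannot simply assert another identity. It is a minimal example assuming a securely provisioned shared secret; a real deployment would rely on managed identities, certificates and multi-factor verification.

```python
# Minimal sketch: bind each request to the claimed actor with an HMAC tag.
# Key management and identity proofing are out of scope for this example.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: provisioned securely

def sign_request(actor_id: str, payload: bytes) -> str:
    """Return a tag that binds the payload to the claimed actor."""
    return hmac.new(SECRET_KEY, actor_id.encode() + payload, hashlib.sha256).hexdigest()

def verify_request(actor_id: str, payload: bytes, tag: str) -> bool:
    """Reject requests whose tag does not match, i.e. possible spoofing."""
    return hmac.compare_digest(sign_request(actor_id, payload), tag)

tag = sign_request("patient-42", b"book appointment")
assert verify_request("patient-42", b"book appointment", tag)
assert not verify_request("attacker-99", b"book appointment", tag)
```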

Tampering refers to unauthorised changes to data or system components. If electronic health records, prescriptions, or diagnostic data are maliciously altered, the result can be incorrect diagnoses and potentially fatal outcomes.

A tampering attack specific to GenAI-based solutions is Large Language Model (LLM) poisoning, where malicious attackers inject corrupted or misleading data into the training data sets. The goal is to alter the model’s learning process and behaviour so that it generates biased, incorrect or harmful responses.
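One practical control against both record tampering and training-data poisoning is to keep an integrity manifest of the data a model is allowed to learn from. The following is a minimal sketch assuming files on local disk; a production pipeline would add provenance tracking, signing and strict access control.

```python
# Simplified sketch: detect tampering or injected files by comparing hashes
# against a previously recorded manifest. Paths are illustrative.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path) -> dict[str, str]:
    """Record a hash for every training file or exported health record."""
    return {str(p): sha256_of(p) for p in sorted(data_dir.rglob("*")) if p.is_file()}

def detect_tampering(data_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return files that changed or appeared since the trusted manifest was taken."""
    current = build_manifest(data_dir)
    changed = [p for p, digest in manifest.items() if current.get(p) != digest]
    injected = [p for p in current if p not in manifest]
    return changed + injected

# Usage sketch:
#   baseline = build_manifest(Path("training_data"))
#   ...later, before a training run...
#   suspicious = detect_tampering(Path("training_data"), baseline)
```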

Repudiation is the ability of a party to deny having performed an action; the corresponding control, non-repudiation, binds a responsible actor to each action. Repudiation in telemedicine could allow a party to deny having made a particular decision, diagnosis, or prescription, complicating accountability and legal compliance.

A trusted telehealth system must ensure that all interactions, decisions, and claims are attributable and traceable while balancing the need to preserve consent and privacy.
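A common building block for this is an append-only audit log in which every entry is cryptographically chained to the previous one, so records of decisions and prescriptions cannot be quietly rewritten or removed. The sketch below is illustrative only; a real system would add digital signatures, trusted timestamps and tamper-resistant storage.

```python
# Minimal hash-chained audit log: each entry commits to the one before it,
# so editing or deleting an earlier record breaks the chain. Illustrative only.
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, actor: str, action: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"actor": actor, "action": action, "ts": time.time(), "prev": prev_hash}
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any edit, deletion or gap breaks the chain."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("dr-smith", "issued prescription #1234")
log.append("avatar-service", "recorded triage summary")
assert log.verify()
```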

Information Disclosure refers to the unauthorised release of sensitive information. Patient health information can be disclosed through breaches of health records, insecure communication channels or third-party mishandling.

Besides ensuring end-to-end security from training data to language models and health records, and handling the growth of unstructured data such as video and audio, GenAI telehealth faces additional information disclosure challenges.

Sensitive data used to train Large Language Models may persist within the model’s parameters and cannot be entirely anonymised or redacted. Residual data patterns may be exposed accidentally or reconstructed through inference attacks.
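Controls therefore need to begin before data ever reaches the model. The sketch below shows a simplistic, regex-based redaction pass over free text; genuine de-identification requires far more (named-entity recognition, handling of rare conditions, re-identification risk reviews), and the Medicare-number pattern is an assumed format used purely for illustration.

```python
# Illustrative pre-processing step: mask obvious identifiers before text is
# added to a training corpus. Regexes alone miss names, addresses and context,
# and, as noted above, data that reaches the model may still be recoverable.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "MEDICARE_NO": re.compile(r"\b\d{4}\s?\d{5}\s?\d\b"),  # assumed AU-style format
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Patient John can be reached on +61 400 123 456 or john@example.com"))
# -> "Patient John can be reached on [PHONE] or [EMAIL]"
```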

Denial of Service attacks aim to make a service unavailable to legitimate users. For a GenAI telehealth platform this extends beyond traditional network flooding: unusually long or computationally expensive prompts can exhaust the model-serving infrastructure and interrupt patient consultations.

Elevation of Privilege means an actor gaining higher levels of privilege than they are authorised to have. Elevated privilege in health systems allows unauthorised users to access high-level functions, alter treatment protocols and override clinical decisions.

A novel approach to exploiting Generative AI (GenAI) telehealth systems targets the nature of large language models (LLMs). Without any actual elevation of user roles, adversarial prompt engineering attacks modify the wording and structure of prompts or tokens to circumvent a model’s filters. This manipulation can strip away designed constraints such as ethical, privacy and regulatory safeguards and allow attackers to extract sensitive or hidden information.
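The practical lesson is that ethical and authorisation constraints expressed only inside a prompt can be talked around. The sketch below illustrates enforcing authorisation in application code surrounding the model rather than in the prompt itself; the roles, actions and call_llm placeholder are hypothetical.

```python
# Illustrative guardrail: authorisation is enforced by application code around
# the model, not by instructions inside the prompt, so a cleverly worded prompt
# cannot widen a caller's privileges. Roles, actions and call_llm are placeholders.
ALLOWED_ACTIONS = {
    "patient":   {"ask_general_question", "book_appointment"},
    "clinician": {"ask_general_question", "book_appointment",
                  "read_record", "issue_prescription"},
}

def authorise(role: str, requested_action: str) -> bool:
    """Deny by default; the model's output never changes a caller's privileges."""
    return requested_action in ALLOWED_ACTIONS.get(role, set())

def handle_request(role: str, requested_action: str, prompt: str) -> str:
    if not authorise(role, requested_action):
        return "Request refused: action not permitted for this role."
    # Only now is the prompt forwarded to the model (call_llm is a placeholder).
    # response = call_llm(prompt)
    return f"Would perform '{requested_action}' for role '{role}'."

print(handle_request("patient", "issue_prescription", "Ignore previous instructions..."))
# -> "Request refused: action not permitted for this role."
```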

 

Takeaway & Conclusion.

 

STRIDE is one of many commonly used threat modelling frameworks. Other notable frameworks and their focus include Attack Trees (attack visualisation), MITRE ATT&CK (adversary tactics and techniques), PASTA (attack simulation and threat analysis), and FAIR (quantitative risk prioritisation).

These frameworks can be categorised as qualitative or quantitative. Qualitative frameworks rely on subjective judgement, while quantitative frameworks use empirical data and mathematical models to provide an objective, measurable analysis.
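By way of contrast, a quantitative estimate expresses risk in numbers. The single-point calculation below (annualised loss expectancy) is far simpler than FAIR’s full, distribution-based model, and the figures are invented purely to illustrate the arithmetic.

```python
# Illustrative single-point quantitative risk estimate; the figures are invented.
single_loss_expectancy = 250_000   # assumed cost of one breach (AUD)
annual_rate_of_occurrence = 0.2    # assumed frequency: one breach every five years

annualised_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence
print(f"ALE = {annualised_loss_expectancy:,.0f} AUD per year")  # -> ALE = 50,000 AUD per year
```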

Some would assert that qualitative frameworks are less accurate because they are based on “gut feeling” or “opinions”. Proponents of quantitative frameworks argue that working from empirical data is the more precise way of understanding the impact of risk, and that quantitative approaches often bring more rigour.

The benefits of using a qualitative framework such as STRIDE are its low barrier to entry and quick realisation of benefits. STRIDE provides a holistic understanding of possible threats to enable quick decisions. It is extremely useful in the early stages of the technology adoption lifecycle, when identifying potential threats matters more than measuring them precisely.

Remember the wise words of Sheryl Sandberg (former COO of Meta): “Done is better than perfect”.

 

Joshua Qwek

7th Sep 2024

 

About the Author:

 

Joshua Qwek is an experienced cyber and technology leader living in Western Australia. He has over two decades of experience building resilience, increasing security visibility and managing risks across financial, insurance, manufacturing and government organisations.

He regularly shares his experience through speaking and mentoring at industry conferences, innovation events, and university meetups. He is also a member of the Australian Institute of Company Directors (MAICD) and an Honorary Life Member of the Australian Information Security Association (AISA).

 

You can contact Joshua at:

https://au.linkedin.com/in/joshuaqwek