AI Functionality and Security
The AI capabilities within the platform have been carefully designed to support practitioners while ensuring they retain full control over decision-making. The system provides suggestions and guidance, but the practitioner remains the final authority on all outcomes.
About the Model
Where output is generated by AI, a large language model (LLM) is used. The LLM technology is provided by third parties including, but not limited to, OpenAI and Microsoft.
Model Accuracy and Use
Imosphere has taken several steps to maximise the accuracy and reliability of AI-generated responses. However, as with any artificial intelligence system, outputs should be reviewed and validated by the user. Practitioners are expected to apply their professional judgement to all AI-assisted content. Imosphere does not guarantee the accuracy, completeness, or suitability of AI-generated output, and the customer remains responsible for determining its appropriateness for any intended use.
To help ensure quality and minimise the risk of hallucinations*, the following safeguards are in place:
- The model operates at a low temperature setting** to encourage focused, consistent responses (see the sketch after this list).
- The model is required to provide a rationale for its responses, allowing users to understand and validate the reasoning behind suggestions.
- The system has undergone extensive testing using a representative set of anonymised Education, Health and Care Plans (EHCPs) to ensure reliable performance in real-world scenarios. Where our testing and quality assurance shows that the AI output does not meet the required accuracy thresholds, the AI will not be enabled for that specific use case.
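
For illustration only, the sketch below shows how a low-temperature, rationale-requiring request might be made to an Azure-hosted model using the openai Python SDK. The endpoint, deployment name, and prompt wording are hypothetical placeholders, not Imosphere's actual configuration.

```python
import os
from openai import AzureOpenAI  # assumes the openai Python SDK, v1+

# Endpoint and deployment name are hypothetical placeholders.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://example-uksouth.openai.azure.com",
)

response = client.chat.completions.create(
    model="example-gpt-deployment",  # the Azure deployment name
    temperature=0.1,  # low temperature: favour focused, consistent output
    messages=[
        {
            "role": "system",
            "content": (
                "You suggest assessment content from EHCP extracts. "
                "For every suggestion, include a short rationale citing "
                "the passage it is based on."
            ),
        },
        {"role": "user", "content": "…anonymised EHCP extract…"},
    ],
)

print(response.choices[0].message.content)
```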
Model Hosting
The application uses a large language model hosted within Microsoft Azure’s UK data centres. This model instance is private and dedicated solely to Imosphere: no data is shared with Microsoft or any other third party. Input data is processed in memory and discarded immediately after processing, ensuring that data is never retained by the model and remains entirely isolated from any model training activities.
Data Security
All data transmissions are encrypted in transit using HTTPS (TLS), with server certificates signed using SHA-256 with RSA, ensuring secure communication between systems. No data is stored or processed outside of Imosphere’s secure infrastructure. The data never leaves the UK.
To support response quality, accuracy, and traceability, short, anonymised snippets of the original document may be temporarily retained within Imosphere’s secure database for the duration of the case. These snippets are stripped of identifiable information and are used exclusively to support the case in progress. They are not accessible via API or through the application interface.
How We Handle Your Data
The application keeps sensitive information about a child and their needs only for as long as it is needed to produce an assessment. There are two points at which the application asks for this data to be uploaded:
- When starting a new assessment
- When requesting a Funding Justification Report for a completed assessment
Starting a New Assessment
At the beginning of a new assessment, the application asks the user to upload the child’s Education, Health and Care Plan (EHCP). This document is securely stored and encrypted in a UK-based data centre. It’s used to help create the assessment.
As mentioned above, small snippets of anonymised results generated by the AI are retained. Once the assessment is complete, the EHCP and any related personal data are securely deleted.
To help verify any future uploads, a secure “fingerprint” of the original EHCP is retained: a one-way cryptographic hash, computed using SHA-256. This fingerprint cannot be used to view or recreate the original document.
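
For illustration, a fingerprint of this kind can be computed in a few lines of Python. The file name below is a hypothetical placeholder, and the sketch assumes the hash is taken over the raw bytes of the uploaded document.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 8 KB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file name; the result is a 64-character hex string.
print(fingerprint("ehcp.pdf"))
```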
Requesting a Funding Justification Report
If the user later asks for a Funding Justification Report based on a completed assessment, they’ll be asked to re-upload the original EHCP. The application checks this document against the stored fingerprint to confirm it matches the original.
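
A sketch of how such a check might work is shown below, assuming the stored fingerprint is a SHA-256 hex digest; the function and file names are illustrative. Because the digest changes if even a single byte differs, a match confirms the re-uploaded file is byte-for-byte identical to the original.

```python
import hashlib
import hmac

def matches_stored_fingerprint(path: str, stored_hex: str) -> bool:
    """Recompute the SHA-256 of a re-uploaded file and compare it to
    the stored fingerprint using a constant-time comparison."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return hmac.compare_digest(digest.hexdigest(), stored_hex)

# Hypothetical usage: reject the upload if the document has changed.
stored_hex = "0" * 64  # placeholder for the stored fingerprint
if not matches_stored_fingerprint("reuploaded_ehcp.pdf", stored_hex):
    raise ValueError("Re-uploaded document does not match the original EHCP")
```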
The user will also be asked to upload any new or updated information relevant to the child’s needs—these are referred to as “Supporting Documents”. While the application is generating the report, the EHCP and any Supporting Documents are stored securely in our UK-based data centres. Once the report is ready and sent to the user’s browser, this data is deleted from our systems.
Any AI-generated content related to the report is retained only while the report is being processed. After it has been delivered to the user’s browser, the content remains there only until the end of the user’s session, at which point it is removed.
We continue to store only the secure fingerprint of the original EHCP, which cannot be used to view the document.
* Hallucinations in large language models (LLMs) refer to instances where the model generates information that is factually incorrect or entirely made up, even though it may sound confident and plausible. This can pose risks where accuracy and trust are critical.
** A low temperature setting in AI (specifically in language models) reduces the randomness in the model’s responses. This means the AI is more likely to choose predictable, consistent answers rather than creative or varied ones. In business settings, using a low temperature can improve accuracy and reliability, especially when you need precise, fact-based, or repeatable outputs—like summaries, reports, or data extraction.