Data privacy and security are crucial in AI research, especially when working with sensitive data such as personal health information or financial records. Researchers must implement rigorous data management practices to protect information from unauthorised access, misuse, and breaches, particularly before the data is input into AI tools.
For example, healthcare applications require careful attention to both the management of health data and the ethical implications of AI. Researchers must obtain informed consent, implement strict data-sharing protocols, and ensure AI systems do not misuse or expose personal information.
To ensure sensitive data does not leak into AI tools, follow these best practices:

- Data minimisation: Only input the minimum amount of sensitive data required for the AI tool's operation. From an ethical standpoint, data minimisation helps prevent unnecessary exposure of personal data and reduces the risks of misuse or discrimination.
- Anonymisation and de-identification: Before inputting data into AI models, anonymise or de-identify it to remove personally identifiable information. This ensures that even if data is exposed, it cannot be linked back to individuals.
- Data integrity and provenance: Ensure that the data input into AI tools is accurate, relevant, and up to date. Maintaining data integrity helps prevent biases and ensures that sensitive information is used correctly and responsibly in AI-driven research. It also involves documenting the source and history of the data (data provenance) to confirm its authenticity and reliability for AI-driven analysis.
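As an illustration of the minimisation and de-identification steps above, the following is a minimal Python sketch. The field names, the whitelist, and the salted-hash pseudonymisation scheme are illustrative assumptions, not a prescribed standard; real projects should follow their institution's approved de-identification procedures.

```python
import hashlib

# Hypothetical patient record; suppose only 'age' and 'diagnosis'
# are actually needed by the AI tool.
record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "age": 47,
    "diagnosis": "type 2 diabetes",
}

# Data minimisation: whitelist the fields to keep, rather than
# blacklisting the ones to drop.
ALLOWED_FIELDS = {"age", "diagnosis"}

def pseudonymise(value: str, salt: str = "study-specific-salt") -> str:
    """Replace a direct identifier with a salted one-way hash
    (pseudonymisation, so records stay linkable within the study)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def prepare_for_ai(rec: dict) -> dict:
    """Keep only whitelisted fields and attach a pseudonymous ID."""
    cleaned = {k: v for k, v in rec.items() if k in ALLOWED_FIELDS}
    cleaned["subject_id"] = pseudonymise(rec["name"])
    return cleaned

safe_record = prepare_for_ai(record)
# No direct identifiers remain in what is sent to the AI tool.
assert "name" not in safe_record and "email" not in safe_record
```

Note that salted hashing is pseudonymisation rather than full anonymisation: whoever holds the salt can still re-link records, so the salt itself must be protected, and stronger techniques (aggregation, generalisation) may be required for truly irreversible de-identification.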