Think of AI data management as organising a digital library: instead of cataloguing digital books or files, you are cataloguing prompts and models.
Set Up Your Prompt Management Table
Use a structured table (as in the AI-prompt-management-template below) with the following columns; a code sketch of the same record structure follows the list:
- Date: Record when the prompt was used or modified.
- Tool Used: Specify the AI tool or model version.
- Prompt Version: Track changes and iterations.
- Purpose/Context: Clearly state the research objective or context for using the AI tool.
- Instruction: Document the exact prompt or instruction given to the AI.
- Input Data: Describe the dataset or information provided to the AI.
- Expected Output: Define the ideal or intended outcome.
- Changes Made: Note any modifications to prompts, data, or methodology.
- Results: Summarise the actual output or findings from the AI.
- Issues Noted: Record any problems, inaccuracies, or unexpected results.
- Next Steps: Outline actions for refinement, further testing, or follow-up analysis.
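For researchers who keep the log in a script rather than a spreadsheet, the columns map naturally onto a record type. Below is a minimal sketch in Python; the class name `PromptLogEntry` and the field names are illustrative choices, not part of any standard tool.

```python
# A minimal sketch of one row of the prompt management table.
# Class and field names are illustrative, mirroring the columns above.
from dataclasses import dataclass

@dataclass
class PromptLogEntry:
    date: str             # when the prompt was used or modified, e.g. "01/02/2025"
    tool_used: str        # AI tool or model version
    prompt_version: str   # e.g. "v1.0", "v1.1"
    purpose: str          # research objective or context for using the AI tool
    instruction: str      # the exact prompt or instruction given to the AI
    input_data: str       # dataset or information provided to the AI
    expected_output: str  # the ideal or intended outcome
    changes_made: str     # modifications to prompts, data, or methodology
    results: str          # summary of the actual output or findings
    issues_noted: str     # problems, inaccuracies, or unexpected results
    next_steps: str       # refinement, further testing, or follow-up actions
```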
Define Evaluation Criteria
For each prompt and output, assess the following criteria; a short scoring sketch follows the table:
| Criterion | What to Assess | Example Scale/Method |
|---|---|---|
| Accuracy | Factual correctness and reliability of the output: Does the response provide correct, trustworthy information? The output should be factually accurate and free from errors. | 1-5 scale (e.g., 5 = fully accurate, 1 = mostly inaccurate) or binary (Y/N) |
| Relevance | Alignment with the prompt’s intent and context: Does the response directly address the user’s request? | 1-5 scale or manual review, possibly using similarity scores |
| Context Match | Fit with the intended research scenario or use case: Does the output match the specific goals and requirements of the research task? Does it stay within the expected context and avoid including unrelated or off-topic information? | 1-5 scale |
| Meaning Similarity | Preservation of the prompt’s intended meaning: Does the output keep the main message and intent of the original prompt, even when the response is generated multiple times or in different ways? The core idea should remain unchanged across different outputs. | 1-5 scale or similarity score |
| Input-Output Match | Completeness and coverage: Does the response address every part of the prompt or question? The output should fully cover all the requirements or points mentioned in the input. | Checklist or 1-5 scale |
| Clarity | Readability, logical flow, and ease of understanding: Is the response easy to read and understand? The output should be well-organised, logically structured, and free from confusing language. | 1-5 scale |
| Specificity | Level of detail and avoidance of vagueness: Does the output provide enough specific information to fully answer the prompt, without being too general or unclear? The response should be detailed and avoid vague statements. | 1-5 scale |
| Consistency | Stability across repeated or similar prompts: If you ask the same or similar questions more than once, do you get consistent and reliable answers each time? The output should be reproducible and not vary without reason. | 1-5 scale or variance check |
| Context Fit Rate | Appropriateness for the research context: Is the response directly relevant to the research task, without including unnecessary or unrelated information? The output should stay focused on what matters for the research scenario. | 1-5 scale |
| Prompt Complexity | Balance between simplicity and detail: Is the prompt written clearly and simply, but with enough detail to get a thorough and focused response? The prompt should not be so simple that it leads to vague answers, or so complex that it causes confusion. | 1-5 scale |
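To make the rubric repeatable, scores can be recorded and summarised programmatically. A minimal sketch, assuming every criterion uses the same 1-5 scale; the criterion keys, the flag threshold, and the `summarise_scores` name are illustrative assumptions.

```python
# Summarise one evaluation: average the 1-5 scores and flag weak criteria.
# Criterion names follow the rubric above; the threshold is an assumption.
from statistics import mean

CRITERIA = [
    "accuracy", "relevance", "context_match", "meaning_similarity",
    "input_output_match", "clarity", "specificity", "consistency",
    "context_fit_rate", "prompt_complexity",
]

def summarise_scores(scores: dict[str, int], flag_below: int = 3) -> dict:
    """Return the mean score and any criteria scoring below the threshold."""
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"unscored criteria: {missing}")
    return {
        "mean_score": round(mean(scores.values()), 2),
        "flagged": [c for c, s in scores.items() if s < flag_below],
    }

# Example: a mostly strong output that was too vague
scores = {c: 4 for c in CRITERIA} | {"specificity": 2}
print(summarise_scores(scores))  # {'mean_score': 3.8, 'flagged': ['specificity']}
```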
AI Evaluation Using the Prompt Management Table

| Date | Prompt Version | Purpose/Context | Instruction | Input Data | Expected Output | Changes Made | Results | Issues Noted | Next Steps |
|---|---|---|---|---|---|---|---|---|---|
| 01/02/2025 | v1.0 | Initial analysis of early-stage research support services offered at ECU | Categorise the types of support offered by ECU in early-stage research. | Data from surveys of researchers (100 responses) | Categorisation of support services into types: funding, mentorship, infrastructure, etc. | None | Clear categorisation of services offered by ECU | Some overlap in mentorship and collaboration services | Refine definitions for mentorship vs collaboration support |
| 02/02/2025 | v1.1 | Refined analysis to include satisfaction ratings from researchers | Analyse the satisfaction levels of researchers with the support services. | Survey data with researcher satisfaction ratings (100 responses) | Sentiment analysis of satisfaction levels (positive, neutral, negative) | Added a filter to focus on specific services mentioned in responses | High satisfaction in funding, low satisfaction in infrastructure | | Clarify survey question related to "infrastructure" in next survey |
| 03/02/2025 | v1.2 | Assessment of gaps in research support based on analysis | Identify gaps or unmet needs in early-stage research support services. | Combined data from previous survey and usage analysis | Report highlighting gaps in available services, with recommendations | New analysis added comparing services against researcher needs | Highlighted a lack of training workshops for researchers | | Plan new service offerings to address training gaps; finalise report and prepare for dissemination |
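If the table above is kept as a CSV file, past iterations can also be reviewed programmatically. A short sketch, assuming a file named `prompt_log.csv` whose header row uses snake_case versions of the column names; both are assumptions, not a fixed convention.

```python
# Print every logged iteration that recorded an open issue.
# Assumes prompt_log.csv with snake_case column headers (an assumption).
import csv

with open("prompt_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["issues_noted"].strip():
            print(f"{row['date']} {row['prompt_version']}: {row['issues_noted']}")
```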
Effective AI data management is crucial for several reasons:
| Reason | Why It Matters |
|---|---|
| Enhanced Research Quality | Good AI prompt management leads to better research outcomes, whether in traditional or AI-based studies. Both AI-assisted and traditional research benefit from clear insights and the ability to validate and understand findings. Properly managed data ensures that research is reproducible and verifiable. |
| Building Trust and Credibility | Well-organised data builds trust and credibility within the research community. It ensures that research findings are consistent and impactful, which is essential for advancing knowledge and innovation. |
| Efficiency and Focus | Effective data management allows researchers to focus on analysis and interpretation rather than spending time on data organisation. This efficiency supports more in-depth and meaningful research. |
| Supporting Fairness and Integrity | Proper data management practices ensure that research is conducted fairly and ethically. They help maintain the integrity of the research process by preventing data manipulation and ensuring transparency. |
| Facilitating Collaboration | Organised data makes it easier for researchers to collaborate. Shared datasets and models can be easily accessed and understood by different team members, fostering a collaborative research environment. |
| Compliance and Security | Managing AI data properly ensures compliance with data protection regulations and standards. It also enhances data security, protecting sensitive information from breaches and unauthorised access. |
Document Each AI Interaction
For every use of AI in your research:
- Fill in the table with the prompt, context, tool used, and expected output.
- Record the actual output and note any changes made to prompts or methodology.
- Evaluate the output against each criterion above, adding scores or comments in the “Results” or “Issues Noted” columns; a logging sketch follows this list.
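As a concrete illustration of the first step, here is a sketch of appending one interaction to a shared CSV log. The file name, field list, and `log_interaction` helper are assumptions, and the example values come from the first row of the sample table above.

```python
# Append one AI interaction to a CSV log, creating the header on first use.
# FIELDS and log_interaction are illustrative names, not a standard tool.
import csv
import os

FIELDS = ["date", "tool_used", "prompt_version", "purpose", "instruction",
          "input_data", "expected_output", "changes_made", "results",
          "issues_noted", "next_steps"]

def log_interaction(entry: dict, path: str = "prompt_log.csv") -> None:
    """Append one row; write the header only when the file does not exist yet."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)  # missing keys default to ""
        if write_header:
            writer.writeheader()
        writer.writerow(entry)

# Example entry based on the first row of the sample table
log_interaction({
    "date": "01/02/2025",
    "prompt_version": "v1.0",
    "instruction": "Categorise the types of support offered in early-stage research.",
    "input_data": "Data from surveys of researchers (100 responses)",
    "results": "Clear categorisation of services offered",
    "issues_noted": "Some overlap in mentorship and collaboration services",
    "next_steps": "Refine definitions for mentorship vs collaboration support",
})
```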
Best Practices
- Be consistent: Use the same criteria and scales for each evaluation.
- Encourage collaboration: Allow multiple team members to contribute to scoring and comments.
- Track changes: Always update the “Prompt Version” and “Changes Made” fields for each iteration.
Ensure Data Quality
- Accuracy: Regularly clean and validate data to ensure it is accurate and free from errors; a small validation sketch follows this list.
- Consistency: Standardise data formats and definitions to maintain consistency across datasets.
- Completeness: Ensure that datasets are complete and contain all necessary information for analysis.
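Parts of these checks can be automated against the prompt log itself. A minimal sketch over the CSV layout used earlier; which fields count as required and the expected version format are assumptions to adapt to your own log.

```python
# Flag incomplete, inconsistent, or duplicated rows in the prompt log.
# Required fields and the "v..." version convention are assumptions.
import csv

def audit_log(path: str = "prompt_log.csv") -> list[str]:
    """Return human-readable data-quality problems, one per finding."""
    problems: list[str] = []
    seen: set[tuple[str, str]] = set()
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
            # Completeness: required fields must not be empty
            for field in ("date", "prompt_version", "instruction"):
                if not row[field].strip():
                    problems.append(f"row {i}: missing {field}")
            # Consistency: version strings should follow one format, e.g. v1.0
            if not row["prompt_version"].startswith("v"):
                problems.append(f"row {i}: unexpected version {row['prompt_version']!r}")
            # Accuracy: duplicate (date, version) pairs suggest copy-paste errors
            key = (row["date"], row["prompt_version"])
            if key in seen:
                problems.append(f"row {i}: duplicate entry {key}")
            seen.add(key)
    return problems

print("\n".join(audit_log()) or "no problems found")
```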
Maintain Compliance
- Comply with regulations: Stay updated with data protection regulations.
- Ethical Standards: Implement ethical guidelines to ensure data privacy and prevent data misuse.
- Documentation: Keep thorough documentation of data sources, processing methods, and any compliance measures taken.
Lifelong Learning
- Continuous Improvement: Regularly update and refine data management practices to adapt to new technologies and methodologies.
- Training and Development: Attend training on the latest tools and best practices.
Monitor and Evaluate
- Regular Audits: Conduct regular audits to identify and address any issues in data management; a sketch of a simple audit summary follows this list.
- Adaptability: Be prepared to adjust strategies based on evaluation results and emerging trends.
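A lightweight way to support regular audits is to track how many logged iterations raised issues over time. A sketch, assuming the dd/mm/yyyy dates used in the sample table; the function name is illustrative.

```python
# Count logged iterations with open issues, grouped by month.
# Assumes dd/mm/yyyy dates, as in the sample table (an assumption to adapt).
import csv
from collections import Counter
from datetime import datetime

def monthly_issue_counts(path: str = "prompt_log.csv") -> Counter:
    counts: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["issues_noted"].strip():
                month = datetime.strptime(row["date"], "%d/%m/%Y").strftime("%Y-%m")
                counts[month] += 1
    return counts

print(monthly_issue_counts())  # e.g. Counter({'2025-02': 1})
```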