P11: Ethical Use of AI Learning Activities in a Master of Nursing Practice Degree to Extend Knowledge and Skill Development

Digital Posters AAIN 2024 Conference

Sarah Beasleigh, La Trobe University

In developing AI learning activities in nursing education, the focus should be on enhancing knowledge and skills, complementing traditional academic competencies rather than replacing them. Tertiary nursing courses should support students in learning AI tools and strengthening their digital literacy (O’Connor et al., 2023). In a graduate-entry Masters nursing program, AI activities were designed to build on students’ prior qualifications and extend their learning capabilities. These activities included defining and critiquing critical thinking and clinical reasoning, applying the clinical reasoning cycle to case studies using an AI chatbot, and appraising research through AI-assisted searches and data analysis.
The nursing profession must evaluate the benefits, limitations, and risks of AI in education (O’Connor et al., 2023) and ensure its responsible use (Glauberman et al., 2023). Ethical use of AI, emphasising original thought, professionalism, and proper citation, is crucial to maintaining academic integrity (Alkhaqani, 2023). As course coordinator, my role was to promote ethical AI use, develop the initial activities, support staff in further integration, and safeguard academic integrity. Ongoing evaluation of AI learning activities is recommended to ensure they complement traditional academic skills, enhance student learning outcomes, and maintain academic integrity.

References
Alkhaqani, A. L. (2023). ChatGPT and academic integrity in nursing and health sciences education. J Med Res Rev, 1(1), 5-8.
Glauberman, G., Ito-Fujita, A., Katz, S., & Callahan, J. (2023). Artificial intelligence in nursing education: Opportunities and challenges. Hawai’i Journal of Health & Social Welfare, 82(12), 302-305.
O’Connor, S., Leonowicz, E., Allen, B., & Denis-Lalonde, D. (2023). Artificial intelligence in nursing education 1: Strengths and weaknesses. Nursing Times, 119(10), 23-26.
O’Connor, S., Permana, A. F., Neville, S., & Denis-Lalonde, D. (2023). Artificial intelligence in nursing education 2: Opportunities and threats. Nursing Times, 119(11), 28-32.

Discussion starters:
  • What are the challenges to ensuring that AI tools maintain academic integrity in your institution?
  • What has your experience been in using chatbots (ethically) in postgraduate coursework?
  • What are the enablers of, and challenges to, ensuring that integrated AI contributes to effective learning in your program?
Watch this AAIN Poster presentation, then engage with the authors and other attendees before the forum day.
You can post questions and comments on the Padlet for this poster using the button below.
Go to the Padlet for this poster

View other posters from the AAIN 2024 conference

Contact us

Please get in touch if you have anything to ask or say about the AAIN.
We'd love to hear from you.


Membership

The Network acknowledges the support of Deakin University in developing and hosting this website.

Acknowledgement to Country

The AAIN recognises the First Peoples of our nations and their ongoing connection to culture and country. We acknowledge First Nations Peoples of our lands as the Traditional Owners, Custodians and Lore Keepers and pay respects to their Elders past, present and emerging.

P10: The odd couple? Gen AI and post-entry language assessment (PELA)


Dr Cameron Lydster, Bond University

Post-entry language assessments (PELAs) are commonly used in Australian and New Zealand universities to identify students who may require academic language and literacies support (for a review, see Read, 2015). Many PELAs, such as the Bond English Language Assessment (BELA; Lydster & Brown, 2017), require students to submit a piece of academic writing and are self-administered online, which raises potential academic integrity concerns. In response, BELA-AI, a formative assessment item incorporating Generative AI, was developed for the Bachelor of Medicine program at Bond University. BELA-AI is a 60-minute assessment comprising three tasks: (1) students write an academic essay; (2) students generate an essay using Generative AI; and (3) students write a brief critical reflection analysing the two essays. Initial discussions with raters (i.e. Bachelor of Medicine academic staff) provide preliminary evidence that BELA-AI has reduced the academic integrity concerns associated with the previous version of the PELA. However, further research is required to confirm that students are maintaining academic integrity and using Generative AI ethically. This case study presents BELA-AI and shows how it has been used as a “conversation starter” with Learning Advisors at the Academic Skills Centre to discuss aspects of academic writing, including maintaining academic integrity. It is argued that Generative AI and language assessment are not such an odd pairing, and embracing the technology is encouraged.

Discussion starters:
  • Does your institution use a PELA? If so, has your institution adapted its PELA in the age of Gen AI? How?
  • Do you think BELA-AI can elicit students’ genuine academic writing ability?
  • BELA-AI is used in one degree program; is it scalable?

P9: ChatGPT in Education: Bridging Integrity and Innovation in Tourism Studies


Dr Rawan Nimri and Dr Elaine Yang, Griffith University

This study explores the impact of generative artificial intelligence (Gen AI) tools on academic integrity in undergraduate tourism courses. Using a duoethnographic approach, it investigates how two tourism educators incorporated Gen AI into their teaching and assessment practices. Central to the study was a learning activity in which students critically evaluated assessment examples generated by Gen AI. This practice enhanced students’ understanding of Gen AI’s strengths and limitations, improved their critical thinking skills, and fostered their ability to use Gen AI responsibly. The study outlines essential strategies to mitigate the academic integrity risks associated with the misuse of Gen AI: providing clear expectations for Gen AI’s role in assessments, fostering critical AI literacy by educating students on AI’s potential and pitfalls, and effectively communicating institutional guidelines for responsible AI use. These strategies facilitate open discussions and collaborative assignment design, effectively balancing technological advancement with ethical considerations (Bond et al., 2024). The findings reveal that recognising students’ varied responses to Gen AI tools encourages thoughtful integration of AI in educational practice. This research offers practical insights for educators navigating AI integration in academic settings and contributes to the broader conversation on AI’s role in higher education, emphasising the importance of maintaining academic integrity while harnessing AI’s potential.

Discussion starters:
  • How can the practice of evaluating AI-generated exemplars help students develop critical thinking and analytical skills?
  • How does engaging with generative AI tools prepare students for future employment?
  • How might the integration of generative AI tools like ChatGPT evolve in higher education over the next decade?

P8: Assessment validation through the lens of Gen AI


Chris Wong, Kaplan Business School, Australia

This presentation showcases a new approach to minimising the unethical use of Gen AI in students’ assessment submissions and to improving the quality of assessments by verifying existing assessment guidelines with Gen AI.
After AI-generated writing was observed in the assessment submissions of one undergraduate subject, a three-step approach was established, drawing on assessment validation concepts and the effective validation guidelines of the Australian Skills Quality Authority (ASQA, n.d.; Miller et al., 2009).
Assessment validation is “a quality review process aimed to assist you as a provider to continuously improve your assessment processes and outcomes by identifying future improvements” (ASQA, n.d.). The aim of the proposed assessment validation approach is twofold, to:

  • validate assessment content by understanding the accuracy and specificity of the answers produced by Gen AI
  • incorporate assessment guidelines and marking criteria that can minimise the unethical use of Gen AI.

The three-step approach is to:

  1. Generate answers from ChatGPT with a range of prompts
  2. Identify the limitations of ChatGPT – free versus subscription mode
  3. Create guidelines to facilitate problem-solving and critical thinking among students (Xia et al., 2024).

This approach minimises the opportunity for academic integrity breaches and enhances students’ understanding of the Gen AI output.

References

Australian Skills Quality Authority (ASQA). (n.d.). Spotlight on assessment validation, Chapter 1. https://www.asqa.gov.au/rto/focus-compliance/series-2-assessment-validation/chapter-1

Flinders University. (2024). Good practice guide – Designing assessment for Artificial Intelligence and academic integrity. https://staff.flinders.edu.au/learning-teaching/good-practice-guides/good-practice-guide—designing-assessment-for-artificial-intell

Miller, M. D., Linn, R. L., & Gronlund, N. E. (2009). Measurement and assessment in teaching (10th ed.). Pearson.

Xia, Q., Weng, X., Ouyang, F., Lin, T. J., & Chiu, T. K. F. (2024). A scoping review on how generative artificial intelligence transforms assessment in higher education. International Journal of Educational Technology in Higher Education, 21(40), 1-22. https://doi.org/10.1186/s41239-024-00468-z

Discussion starters:
  • What kinds of prompts should we create to generate relevant answers?
  • What is the process for compiling the additional guidelines?
  • How should we decide the penalty when a student submission is similar to the Gen AI answers?

P7: Manage cases of GenAI misuse – when is it a marking issue


Mingwei Sun and Jack Samson, Australian Institute of Business

The Australian Institute of Business (AIB), Australia’s largest online MBA provider, has witnessed a surge in academic integrity investigations (the number has at least doubled) due to the prevalent use of GenAI in student assessment submissions. These cases are new to everyone and involve far more complex elements, creating an unprecedented challenge for the academic integrity team and resulting in delays and frustration in the investigation process, for both the academic integrity team and subject coordinators. Through trial and error over the past two years, AIB’s academic integrity team has refined its processes and is willing to share key academic integrity investigation principles that will interest other educational providers, especially in an online learning environment where all assessments are submitted online. Specifically, we would like to discuss the importance of early categorisation of cases: differentiating cases that warrant further investigation (AIO) from cases that can be resolved through marking and assessment design (subject coordinator). Our discussion will yield useful insights for process development and assessment design, as well as staff training insights for both academic integrity staff and subject coordinators. Thinking this through will help simplify and speed up the academic integrity process in the GenAI age.

Discussion starters:
  • Do we need to spend resources to investigate ALL academic integrity cases that are flagged?
  • What information do you require from academics when they flag the cases?
  • What is clear evidence of GenAI misuse that warrants further investigation of a case?
  • What kind of evidence is insufficient (and hence to be resolved through marking and assessment design)?
