Description
The Faculty of Arts Learning and Teaching Awards highlight and reward individuals and teams who make outstanding contributions to learning and teaching in the Faculty of Arts, going above and beyond in service of their students and the pursuit of excellence in their practice. Excellence is demonstrated through contributions that significantly surpass standard expectations, positively transforming student learning experiences and outcomes in unique or innovative ways.
Award text: Pioneering AI-Enhanced Learning Across Multiple Disciplines in the Faculty of Arts
Educational Leadership Award, Criteria of Leadership and Integration 1,2,3,4
The landscape of higher education transformed on 7 February 2023 with the release of Microsoft’s AI-powered Bing chatbot. In response, our interdisciplinary team has led FoA in establishing best practices for AI in humanities education, focusing on effective, appropriate, and ethical teaching and assessment techniques.
This team’s approach pioneers best practices for integrating Large Language Models (LLMs) into humanities and social-science curricula. Through critical prompting techniques and TEQSA-suggested authentic assessments, we equip students with essential digital literacy skills while maintaining academic integrity, setting benchmarks for AI integration across disciplines.
Our initiatives showcase scalable best practices through key projects that reached more than 1,500 students. From AI-powered ‘clients’ in law to innovative language learning approaches, these projects demonstrate leadership in developing and integrating skills and assessments that will shape the future of AI-enhanced education across multiple disciplines. Through this work, we have modelled practical and effective AI-enabled teaching techniques for our colleagues.
Criteria 1 and 2: Leadership and Integration
We demonstrated systematic leadership by pioneering the use of AI and integrating these digital literacy and employability skills within discipline-specific contexts, creating sustainable models for curriculum development beyond 2024. Through authentic assessments that combine disciplinary knowledge with emerging technologies, we are leading by establishing best practices that enhance both academic integrity and professional preparedness across FoA. These initiatives included:
1 AI-Powered ‘Client’ for Interactive Learning in Law (Head, Cam, Ballsun-Stanton)
An AI-powered ‘client’ was integrated into law tutorials, providing over one thousand students in 2024 with interactive, personalised, and situated learning experiences [1a]. Implemented in multiple units including Remedies and Torts, this scalable activity allowed for individualised, authentic client interactions. Students developed critical soft skills and legal problem-solving abilities through simulated client interviews. Prior tutorials relied on written hypotheticals, and students strongly preferred the new format [1b]. Student feedback was overwhelmingly positive, with sixty of ninety-three surveyed students rating their engagement at the highest level, and fifty-four indicating maximum motivation for active participation [1c]. This innovative tutorial exercise now leads cross-faculty classroom engagement design [1d].
2 AI-Enhanced Activities in Political Science (Stolfi, Hawker, Ballsun-Stanton)
In 2023, POIR2070 implemented TEQSA-recommended authentic assessments by pioneering AI in the study of real policy issues with 170 students. Students acted as policy entrepreneurs, completing two tasks, an email exchange and a stakeholder presentation, with LLM use actively encouraged. Feedback confirmed that this approach allowed students to ‘...really focus more on the aspects of good policymaking’ [2]. The AI integration enhanced key employability skills: ‘[it was] unique and engaging in a very different way to the usual essay/exam structure’, setting students up ‘well for [their] future outside of uni’ [2].
3 AI Tools in German Studies (Revink, Ballsun-Stanton, Garde)
In GRMN1020 in 2023–24, students used AI for portfolio assessments, planning a fictional German exchange trip. This unit pioneered implementing TEQSA’s advice on Authentic Assessments as pedagogically relevant AI use [3]. Introductory workshops, reusable guidance videos, and in-class support helped students use AI for cultural research and A2-level language practice. Students reported breakthroughs, with one noting that AI-integration helped them overcome persistent language learning challenges [3].
4 Early Experiments in PICT2020 (Hurley, Ballsun-Stanton)
PICT2020 was among the first units to integrate AI into the classroom in 2023. Of 176 survey respondents, 154 reported receiving ‘absolutely’ or ‘almost absolutely’ sufficient guidance on AI use [4]. Key aspects included comprehensive immersion in AI use, benefits, and pitfalls, alongside AI-supported assessments including vivas, reflections, and ethical considerations. Students reported improved critical thinking and argument development, with one noting, ‘AI helped me understand which sections I was struggling in’ [4]. This pathfinder unit informed faculty-wide integration strategies and calibrated expectations for AI in classrooms.
5 AI-Driven Research and Assessment in ARTS3500 (Ballsun-Stanton, Laurence, Khalid)
ARTS3500 offered three experimental streams in Ancient History, Philosophy, and Politics and International Relations, each utilising AI in discipline-specific ways. Through peer mentoring and collaborative learning, twenty-seven students developed sophisticated approaches to AI integration, with one student noting that ‘the collaborative sharing of prompts and insights... has been invaluable in identifying effective strategies’ [5a]. The integration extends beyond the unit itself, with Laurence planning to implement AI across multiple Ancient History units, reaching over 470 students across all levels, and Jodie Torrington drawing on her observation of the unit to support the School of Education’s AI use [5c]. Students recognised the broader implications, with one noting ‘ARTX3500 has enhanced my understanding of AI and its practical applications in both personal and academic contexts... it could be valuable as a mandatory module for all tertiary students’ [5a]. This work has also produced innovative assessment designs that serve as models for future PACE research. Externally, the student-generated 158-source bibliography has already attracted online attention [5b].
6 Novel Knowledge Framework for Assessing Law Students with AI (Head, Willis)
A novel knowledge framework was developed for assessing law students in the era of AI, consisting of three pillars: Substantive Legal Knowledge, AI Ethics Knowledge, and AI System Knowledge. The framework, published in the International Journal of the Legal Profession5 and currently the journal’s 4th most viewed article of 2024, combines detailed analysis of existing AI literature with an ethics-approved case study [6a]. Implementation led to improved student outcomes, a reduction in fail grades, and fewer late submissions. The framework is informing assessment design across the Faculty, with student reflections demonstrating enhanced critical engagement with AI tools in legal research and writing [6b].
7 LLM Workshops, Department Seminars, Professional Development (Ballsun-Stanton)
Thought leadership in the effective and critical use of LLM techniques has been central to this exercise. Beyond leading the efforts of academics on this team, Ballsun-Stanton has run 29 workshops and department seminars for academics in FoA, academics across campus, postgraduate students, the public, and international audiences in Germany, Czechia, and the United States. His fortnightly professional development seminars mentoring colleagues in digital humanities have built sustained communities of practice, with participants noting that his ‘deep and up-to-date knowledge cuts through the hype and clarifies the potentials and limitations’ of AI tools [7]. This has led to invitations from IEEE Silicon Valley and multiple Australian universities to present and consult on the policy and practical pedagogy of AI, thereby scaling established knowledge and best practices gleaned from the other initiatives in this application.
Evidence 1: AI-Powered 'Client' for Interactive Learning in Law (Head, Cam, Ballsun-Stanton)
Evidence 1a: Comments in LAWS5000 LEU Survey (Semester 1, 2024):
◦ ‘I liked the use of AI in one of the tutorials’
◦ ‘I really liked the AI client tutorial we did’
◦ ‘The AI client activity was fun and engaging’
◦ ‘I loved the AI exercise and it clearly connected with class content’
◦ ‘The AI client problem-solving activity was very engaging and a fun way to apply what we've been learning in a way that simulates, to some extent, real life clients’
◦ ‘I also appreciate the positive use of AI which made me very excited to be dealing with real-life clients’
Evidence 1b: Other Comments in LAWS5000 Survey:
◦ ‘I really enjoyed the experience because it felt like I was developing more skills than just learning to answer a written hypothetical. These skills included asking questions, being empathetic, other interpersonal skills, and understanding the client.’
◦ ‘It is helpful in enhancing students’ problem-solving skills by using limited information to determine the next best question to ask the client. I also think it would be a really helpful learning tool for students to use in their own time to practice for exams and to prepare students for real-life work.’
Evidence 1c: Quantitative Feedback from LAWS5000 Survey (n=93)
◦ 60/93 students rated engagement at the highest level (5/5)
◦ 54/93 rated impact on learning motivation at the highest level (5/5)
◦ Key student insight: ‘It was a lot of fun and insightful getting to use AI in a lawyer-client setting, and was extremely helpful in informing me of how being a real-world lawyer will be like’
Evidence 1d: Cross-Faculty Impact
This successful model is now being adapted for implementation across additional law units and is informing the development of similar AI-enhanced experiential learning activities in the Faculty of Science and Engineering’s Learning and Teaching team.
Evidence 2: AI-Enhanced Activities in Political Science (Stolfi, Hawker, Ballsun-Stanton)
This unit also represented a 2023 approach to implementing TEQSA’s Authentic Assessments with AI. Student feedback included: ‘3 units at the end of the semester was well worth it. I strongly recommend that this is continued for the future of the unit. I also recommend the use of AI is continued in the unit as the use of AI allows students to more or less skip all of the time-consuming “grunt” work and instead really focus more on the aspects of good policy-making and demonstrate the extent to which they have understood the content being taught in the unit. Overall, a very unique unit but I very much welcome the variety and unusual assessment style.’ Multiple students reflected that this authentic assessment, which encouraged the use of AI, confronted them and their learning methods – the normal approach of ‘just write an essay’ did not apply. This assessment framework also forced students to examine their assumptions about source quality and the factual accuracy of AI-generated text, all while supporting them in conducting an authentic experiment. ‘[T]he new assessments I took on in S1 2024 were unique and engaging in a very different way to the usual essay/exam structure of POIR units. I feel the new version of the assessments set me up well for my future outside of uni.’
Evidence 3: AI Tools in German Studies (Revink, Ballsun-Stanton, Garde)
This unit was one of the earliest to adopt TEQSA’s advice on Authentic Assessments as an empowering mechanism for AI use. Students across two years enjoyed the authenticity of the simulated intercultural exchange, and they have reported multiple successful pedagogical outcomes. One student reported that this assessment, combined with AI, was the only way they were able to break the ‘error carried forward’ problems in their German language understanding. Another student discussed the portfolio task with their family, who were very positively surprised that a task like this was included in a German language class, calling it a ‘brilliant’ way of gaining further insights into German culture: ‘My Aunt thought it was a great assignment. She thought it was good because it “made me more familiar with German culture - cities, food, activities, history and architecture.”’ This semester’s class had an excellent, engaging discussion on both sides of the utility of AI in language education, enabled by the development of the assessment and the teaching materials and videos by Revink and Ballsun-Stanton.
Evidence 4: Early Experiments in PICT2020 (Hurley, Ballsun-Stanton)
Of the 176 respondents to the end-of-class survey, 154 felt ‘absolutely’ (77) or ‘almost absolutely’ (also 77) that they received sufficient guidance on using AI and LLMs in the unit. Selected responses include:
◦ ‘The support and guidance regarding the use of AI, and how LLMs work in general was extremely beneficial and throughout. [Brian’s] guest lecture provided an in-depth explanation of how these AI Chatbots actually work, and how to tailor them to suit your needs. This made me more comfortable in using them throughout the unit, as I had the tools and knowledge to make them work for me.’
◦ ‘I think was a great starting point to introducing AI and I look forward to seeing the possible future integration of AI into the university’s academic work.’
◦ ‘using AI helped me understand which sections i was struggling in and which sections i was good in. I am not good with coming up with arguments and ideas but once I was given examples by the AI, it made it easy for me to expand on the ideas. It made me realise I am decent at proving a point but not good at finding evidence to back that point.’
◦ ‘I think that the use of AI in the unit is good because even if I never have the chance to use it again as a university student I can then go out into the workforce being confident in my abilities in using it if I am ever required to do so.’
This unit was the pathfinder that helped Ballsun-Stanton and Hurley provide support for the Department of Security Studies on appropriate levels of AI use in the classroom, and, more importantly, to calibrate expectations on how much additional training and lecture time would be needed to empower students to use these tools ethically and effectively in the classroom.
Evidence 5: AI-Driven Research and Assessment in ARTS3500 (Ballsun-Stanton, Laurence, Khalid)
The fundamental success of this unit is that more than ten of its 27 undergraduate students volunteered to continue research after the semester ended, including students living internationally. In vivas, students presented nuanced plans for how they will use LLMs as part of their work, how they will communicate appropriate uses to employers, and how their own judgement is a critical component of use. This unit has allowed academics in FoA to learn how to teach the praxis and necessary judgement of LLM use for academic work.
Evidence 5a: Student Reflections on ARTS3500 Learning Experience
◦ Pedagogical Impact: ‘Throughout the course of the unit, there have been numerous personal and class wide instances of effective generative AI techniques and practices... my prompting process has drastically evolved since the beginning of the unit, and now mirrors that of an intuitive and well-experienced AI user who comprehends its skills and inadequacies.’
◦ Peer Learning Value: ‘The collaborative sharing of prompts and insights with the peers in my stream... has been invaluable in identifying effective strategies and areas for improvement.’
◦ Future Applications: ‘ARTX3500 has enhanced my understanding of artificial intelligence (AI) and its practical applications in both personal and academic contexts... suggesting it could be valuable as a mandatory module for all tertiary students.’
◦ Professional Development: ‘... growing need to teach students on how they can use AI effectively, while also emphasising it is a tool that should be used alongside their own work. AI is a great tool that is effective when used to boost a piece of work.’
Evidence 5b: Feedback from teachers around the world
When student work from this unit6 was promoted to academics and teachers online, responses included:
◦ ‘Wow this is a very useful resource’ Prof at Rutgers Uni and Macquarie alumnus
◦ ‘Say thanks to students who made this’ President of History Teachers’ Association of NSW
◦ ‘Interesting - and great for [HSC] Extension and use of AI technology too!’
◦ ‘This is absolutely thrilling: sound and good work, really the thing we need for questions that are often answered without any sense for nuance. Send my regards and congratulations to your students too.’ Professor Laes (Manchester)
Evidence 5c: Support from academics
Ray Laurence: ‘My involvement with ARTS3500 is leading to the following outcomes for my teaching and for my practice of research training for Graduate Research Students that will be enacted in 2025:
◦ AHIS1210/AHIX1210 Inclusion of instruction to students for use of LLMs to plan their second assessment. Include instruction in the use of Zotero – so that students are set up for the rest of their degree (ARTS3500 students had missed this and saw its value in hindsight)
◦ AHIS2225/AHIX2225 Instruction to students on use of LLMs to harvest, collate and catalogue ancient historical sources.
◦ Graduate Research Student Workshop/s Use of LLMs for scoping of research topics and formation of annotated bibliographies. Use of LLMs for the development of abstracts for conferences, thesis and publications, and Use of LLMs to plan publications in the context of Journal requirements.
This unit has provided me with knowledge around the effective (and useless) use of LLMs, as well as the confidence to inform students of their potential usages.’
Jodie Torrington: ‘Being involved in the experimental AI ARTS3500 class this semester has been extremely impactful on both my AI knowledge, understanding and efficacy. I have learned so much from Brian; importantly, changing my mindset around AI and the nuances involved in interacting with it. I have been able to translate the knowledge and understanding from Brian into the microcredentials I am developing for the MQ Teachers’ Learning Hub, and also for keynote speaking engagements including: the NSW Council of Deans, the MQ College, and the Sydney College of Divinity conferences. I have also created AI prompting insights, developed from Brian’s wisdom, that I have shared with the Macquarie School of Education staff.’
Evidence 6: Novel Knowledge Framework for Assessing Law Students with AI (Head, Willis)
Evidence 6a: Design Features
◦ Published open access on 19 July 2024. As of 30 October 2024, the paper has been viewed 656 times and is currently the 4th most viewed article in the journal for 2024.
◦ Adopted by MQ Law School for best practice in AI-integrated assessment
◦ Assessment design incorporating the Knowledge Framework: Torts research paper on Climate Litigation. Key design features:
▪ It does not prohibit the use of AI but rather provides resources (including a lecture by convenor of the unit) on how to ethically engage with AI in the research process (GenAI System Knowledge, GenAI Ethics Knowledge).
▪ It requires engagement with up-to-date, authoritative sources on the topic – behind a paywall and very recent, so less likely to be in LLM training data (Substantive Legal Knowledge).
▪ It requires students to reflect on their own research process, which has a strong pedagogical basis (see e.g. Sheppick, C. (2024)7), while also making it difficult to rely solely on AI-generated content, as the reflection serves as a check on the main submission.
▪ It promotes critical thinking about the role of AI in research and writing and awareness of AI’s strengths and limitations in legal research (GenAI Ethics Knowledge).
Evidence 6b: Feedback
◦ Student reflection: ‘I adopted a dual approach... I would conduct my own research and then used AI as a secondary aid, relying on it for clarification and refinement rather than as a primary source’
◦ Anonymous reviewer: ‘Law teachers everywhere would benefit from the article... With one eye to the professional admission requirements, the authors use a first-year legal ethics course to demonstrate both the strengths and weaknesses of GenAI in terms of student learning in relation to both their substantive knowledge of legal ethics and their understanding of the pros and cons of the technology. Law teachers would be likely to be encouraged not only by the positive experience of learning that students acquired, in regard to both the substantive knowledge of the course and the technology, but also the fact that there was a decline in the percentage of fail grades in the case study... This timely article has the potential to inform law teachers not only of the ethical risks of GenAI, but also its positive value as a teaching tool for prospective lawyers’
◦ Colleagues: ‘Would love to chat with you and/Amanda at some point later in the session ... about advice for possibly rejigging second media law assessment. We cover AI in the media landscape so it could be a nice tie in.’
Evidence 7: LLM Workshops, Department Seminars, Professional Development (Ballsun-Stanton)
◦ Impact on Teaching Practice: ‘Brian's fortnightly AI group has given me a much deeper understanding of how AI may be used in supporting my own practice as a learning designer... Based on what I have learnt in this group, I have run a workshop for my own team on using Whisper AI for transcription.’ (Karen Woo, FSE L&T)
◦ On a LLM intro workshop: ‘Thank you so much for your efforts in making yesterday’s training happen. I feel much more confident trialling use of AI now. I got a lot out of it. [Thanks for] these further resources as I will do more exploring.’
◦ Feedback from staff in the chancellery on a policy workshop: ‘Thanks for today’s session Brian, very informative.’ and, from a University Executive, ‘Thank you for your expertise, insights and frankness today Brian - that was an excellent session.’
◦ From the University of Tasmania: ‘We have had an overwhelming response to your events, which we have opened up to a wide range of disciplines (social sciences, medicine, engineering, education etc).’
◦ Impact on AI understanding: ‘he has changed the way I think about AI and how it works by correcting the widespread use of “hallucinates” when it comes to AI (“Only humans hallucinate. AI confabulates”). Now I am always silently correcting people who talk about AI “hallucinations”’
◦ Feedback on an infographic created with Jodie Torrington: ‘…we’ve received lots of very positive feedback about how helpful it was, and it was a great launching pad for our ongoing conversations around AI.’ (Michael Cavanagh) and ‘It will be a great help when using AI and I’d like to share it with my students.’ (Constantine R. Campbell, Sydney College of Divinity)
1 This application was edited with the help of Claude 3.5 Sonnet.
2 LaTeX Word Count, 950 words, excluding headers (73 words)
3 A/Prof Ulrike Garde is presently at the University of Weimar, included due to her leadership at the time of unit creation.
4 Applicants in alphabetical order.
5 Head, A., & Willis, S. (2024). Assessing law students in a GenAI world to create knowledgeable future lawyers. International Journal of the Legal Profession, 1–18. https://doi.org/10.1080/09695958.2024.2379785
6 Green, G., Rodgers, J., & Tainsh, C. (2024). Caligula’s Madness, an Annotated Bibliography: 1856–2024 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.13999404.
7 Sheppick, C. (2024). Unveiling the benefits of reflective learning in professional legal practice. International Journal of the Legal Profession, 31(2), 207–221. https://doi.org/10.1080/09695958.2024.2345924.
Award text:
Pioneering AI-Enhanced Learning Across Multiple Disciplines in the Faculty of Arts
Educational Leadership Award, Criteria of Leadership and Integration 1,2,3,4
The landscape of higher education transformed on 4 February 2023 with the release of Microsoft’s AI- powered chatbot. In response, our interdisciplinary team has led FoA in establishing best practices for AI in humanities education, focusing on effective, appropriate, and ethical teaching and assessment techniques.
This team’s approach pioneers best practices for integrating Large Language Models (LLMs) into humanities and social-science curricula. Through critical prompting techniques and TEQSA-suggested authentic assessments, we equip students with essential digital literacy skills while maintaining academic integrity, setting benchmarks for AI-integration across disciplines.
Our initiatives showcase scalable best practices through key projects which reached more than 1,500 students. From AI-powered ‘clients’ in law to innovative language learning approaches, these projects demonstrate leadership in developing and integrating skills and assessments that will shape the future of AI-enhanced education across multiple disciplines. Through this work, we have continued to demonstrate excellence to our colleagues in practical and effective AI-enabled teaching techniques.
Criteria 1 and 2: Leadership and Integration
We demonstrated systematic leadership by pioneering the use of AI and integrating these digital literacy and employability skills within discipline-specific contexts, creating sustainable models for curriculum development beyond 2024. Through authentic assessments that combine disciplinary knowledge with emerging technologies, we are leading by establishing best practices that enhance both academic integrity and professional preparedness across FoA. These initiatives included:
1 AI-Powered ‘Client’ for Interactive Learning in Law (Head, Cam, Ballsun-Stanton)
An AI-powered ‘client’ was integrated into law tutorials, providing over one thousand students in 2024 with interactive, personalised, and situated learning experiences [1a]. Implemented in multiple units including Remedies and Torts, this scalable activity allowed for individualised, authentic client interactions. Students developed critical soft skills and legal problem-solving abilities through simulated individualised interviews. Prior tutorials relied on written hypotheticals, and students much prefer this new format [1b]. Student feedback was overwhelmingly positive, with sixty out of ninety-three surveyed students rating their engagement at the highest level, and fifty-four students indicating maximum motivation for active participation [1c]. This innovative tutorial exercise leads cross-faculty classroom engagement design [1d].
2 AI-Enhanced Activities in Political Science (Stolfi, Hawker, Ballsun-Stanton)
In 2023, POIR2070 implemented TEQSA-recommended authentic assessments by pioneering AI in studying real policy issues with 170 students. Students acted as policy entrepreneurs, completing two tasks: an email exchange and a stakeholder presentation, with encouraged LLM use. Feedback confirmed that this approach allowed students to ‘...really focus more on the aspects of good policymaking’ [2]. The AI-integration enhanced key employability skills; ‘[it was] unique and engaging in a very different way to the usual essay/exam structure’, and set them up ‘well for their future outside of uni’ [2].
3 AI Tools in German Studies (Revink, Ballsun-Stanton, Garde)
In GRMN1020 in 2023–24, students used AI for portfolio assessments, planning a fictional German exchange trip. This unit pioneered implementing TEQSA’s advice on Authentic Assessments as pedagogically relevant AI use [3]. Introductory workshops, reusable guidance videos, and in-class support helped students use AI for cultural research and A2-level language practice. Students reported breakthroughs, with one noting that AI-integration helped them overcome persistent language learning challenges [3].
4 Early Experiments in PICT2020 (Hurley, Ballsun-Stanton)
PICT2020 was among the first to integrate AI into the classroom in 2023. Of 176 survey respondents, 154 reported receiving ‘absolutely’ or ‘almost absolutely’ sufficient guidance on AI use [4]. Key aspects included comprehensive AI immersion, use, benefits, and pitfalls; AI-supported assessments including vivas, reflections, and ethical considerations. Students reported improved critical thinking and argument development, with one noting, ‘AI helped me understand which sections I was struggling in’ [4]. This pathfinder unit informed faculty-wide integration strategies and calibrated expectations for AI in classrooms.
5 AI-Driven Research and Assessment in ARTS3500 (Ballsun-Stanton, Laurence, Khalid)
ARTS3500 offered three experimental streams in Ancient History, Philosophy, and Politics and International Relations, each utilising AI in discipline-specific ways. Through peer mentoring and collaborative learning, twenty-seven students developed sophisticated approaches to AI-integration, with one student noting that ‘the collaborative sharing of prompts and insights... has been invaluable in identifying effective strategies’ [5a]. The integration extends beyond the unit itself, with Laurence planning to implement AI across multiple Ancient History units, reaching over 470 students from all levels [5c] and Jodie Torrington using observation of the unit to support Education’s AI use [5c]. Students recognised the broader implications, with one noting ‘ARTX3500 has enhanced my understanding of AI and its practical applications in both personal and academic contexts... it could be valuable as a mandatory module for all tertiary students’ [5a]. This work also has innovated designs for assessments that serve as models for future PACE research. Externally, the student-generated 158 source bibliography has already attracted online attention [5b].
6 Novel Knowledge Framework for Assessing Law Students with AI (Head, Willis)
A novel knowledge framework was developed for assessing law students in the era of AI, consisting of three pillars: Substantive Legal Knowledge, AI Ethics Knowledge, and AI System Knowledge. The framework, published in the International Journal of the Legal Profession5 and currently the 4th most viewed article of 2024, combines detailed analysis of existing AI literature with an ethics-approved case study [6a]. Implementation led to improved student outcomes, reduction in fail grades, and fewer late submissions. The framework is informing assessment design across the Faculty, with student reflections demonstrating enhanced critical engagement with AI tools in legal research and writing [6b].
7 LLM Workshops, Department Seminars, Professional Development (Ballsun-Stanton)
Thought leadership in the effective and critical use of LLM techniques has been central to this exercise. Beyond leading the efforts of academics on this team, Ballsun-Stanton has run 29 workshops and depart- ment seminars for academics in FoA, academics across campus, postgraduate students, the public, and internationally in Germany, Czechia, and the United States. His fortnightly professional development seminars mentoring colleagues in digital humanities have built sustained communities of practice, with participants noting that his ‘deep and up-to-date knowledge cuts through the hype and clarifies the potentials and limitations’ of AI tools [7]. This has led to invitations from IEEE Silicon Valley and multiple Australian universities to present and consult on the policy and practical pedagogy of AI, thereby scaling established knowledge and best-practices gleaned from the other initiatives in this application.
Evidence 1: AI-Powered 'Client' for Interactive Learning in Law (Head, Cam, Ballsun-Stanton)
Evidence 1a: Comments in LAWS5000 LEU Survey (Semester 1, 2024):
◦ ‘I liked the use of AI in one of the tutorials’
◦ ‘I really liked the AI client tutorial we did’
◦ ‘The AI client activity was fun and engaging’
◦ ‘I loved the AI exercise and it clearly connected with class content’
◦ ‘The AI client problem-solving activity was very engaging and a fun way to apply what we've been learning in a way that simulates, to some extent, real life clients’
◦ ‘I also appreciate the positive use of AI which made me very excited to be dealing with real-life clients’
Evidence 1b: Other Comments in LAWS5000 Survey:
◦ ‘I really enjoyed the experience because it felt like I was developing more skills than just learning to answer a written hypothetical. These skills included asking questions, being empathetic, other interpersonal skills, and understanding the client.’
◦ ‘It is helpful in enhancing students’ problem-solving skills by using limited information to determine the next best question to ask the client. I also think it would be a really helpful learning tool for students to use in their own time to practice for exams and to prepare students for real-life work.’
Evidence 1c: Quantitative Feedback from LAWS5000 Survey (n=93)
◦ 60/93 students rated engagement at the highest level (5/5)
◦ 54/93 rated impact on learning motivation at the highest level (5/5)
◦ Key student insight: ‘It was a lot of fun and insightful getting to use AI in a lawyer-client setting, and was extremely helpful in informing me of how being a real-world lawyer will be like’
Evidence 1d: Cross-Faculty
This successful model is now being adapted for implementation across additional law units and is informing the development of similar AI-enhanced experiential learning activities in the Faculty of Science and Engineering’s Learning and Teaching team.
Evidence 2: AI-Enhanced Activities in Political Science (Stolfi, Hawker, Ballsun-Stanton)
This unit also represented an early (2023) approach to implementing TEQSA’s Authentic Assessments with AI. Student feedback included: ‘3 units at the end of the semester was well worth it. I strongly recommend that this is continued for the future of the unit. I also recommend the use of AI is continued in the unit as the use of AI allows students to more or less skip all of the time-consuming “grunt” work and instead really focus more on the aspects of good policy-making and demonstrate the extent to which they have understood the content being taught in the unit. Overall, a very unique unit but I very much welcome the variety and unusual assessment style.’ Multiple students reflected that this authentic assessment, which encouraged the use of AI, confronted them and their learning methods: the normal approaches to ‘just write an essay’ did not apply. This assessment framework also forced students to examine their assumptions about source quality and the factual accuracy of AI-generated text, all the while supporting them in conducting an authentic experiment. ‘[T]he new assessments I took on in S1 2024 were unique and engaging in a very different way to the usual essay/exam structure of POIR units. I feel the new version of the assessments set me up well for my future outside of uni.’
Evidence 3: AI Tools in German Studies (Revink, Ballsun-Stanton, Garde)
This unit was one of the earliest to adopt TEQSA’s advice on Authentic Assessments as an empowering mechanism for AI use. Students across two years enjoyed the authenticity and simulated intercultural exchange, and reported multiple successful pedagogical outcomes. One student reported that this assessment, combined with AI, was the only way they were able to break the ‘error carried forward’ problems in their German language understanding. Another student discussed the portfolio task with their family, who were very positively surprised that a task like this was included in a German (language) class, calling it a ‘brilliant’ way of gaining further insights into, and learning about, German culture: ‘My Aunt thought it was a great assignment. She thought it was good because it “made me more familiar with German culture - cities, food, activities, history and architecture.”’ This semester’s class had an excellent, engaging discussion on both sides of the utility of AI in language education, enabled by the development of the assessment and the teaching materials and videos by Revink and Ballsun-Stanton.
Evidence 4: Early Experiments in PICT2020 (Hurley, Ballsun-Stanton)
Of the 176 respondents to the end-of-class survey, 154 felt ‘absolutely’ (77) or ‘almost absolutely’ (77) that they received sufficient guidance on using AI and LLMs in the unit. Selected responses include:
◦ ‘The support and guidance regarding the use of AI, and how LLMs work in general was extremely beneficial and throughout. [Brian’s] guest lecture provided an in-depth explanation of how these AI Chatbots actually work, and how to tailor them to suit your needs. This made me more comfortable in using them throughout the unit, as I had the tools and knowledge to make them work for me.’
◦ ‘I think was a great starting point to introducing AI and I look forward to seeing the possible future integration of AI into the university’s academic work.’
◦ ‘using AI helped me understand which sections i was struggling in and which sections i was good in. I am not good with coming up with arguments and ideas but once I was given examples by the AI, it made it easy for me to expand on the ideas. It made me realise I am decent at proving a point but not good at finding evidence to back that point.’
◦ ‘I think that the use of AI in the unit is good because even if I never have the chance to use it again as a university student I can then go out into the workforce being confident in my abilities in using it if I am ever required to do so.’
This unit was the pathfinder that helped Ballsun-Stanton and Hurley provide support to the Department of Security Studies on appropriate levels of AI use in the classroom and, more importantly, calibrate expectations on how much additional training and lecture time would be needed to empower students to use these tools ethically and effectively.
Evidence 5: AI-Driven Research and Assessment in ARTS3500 (Ballsun-Stanton, Laurence, Khalid)
The fundamental success of this unit is that more than 10 of its 27 undergraduate students volunteered to continue research after the semester ended, including students living internationally. In vivas, students presented nuanced plans about how they intend to use LLMs as part of their work, how to communicate appropriate uses to employers, and how their own judgement is a critical component of use. This unit has allowed academics in FoA to learn how to teach the praxis and necessary judgement of LLM use for academic work.
Evidence 5a: Student Reflections on ARTS3500 Learning Experience
◦ Pedagogical Impact: ‘Throughout the course of the unit, there have been numerous personal and class wide instances of effective generative AI techniques and practices... my prompting process has drastically evolved since the beginning of the unit, and now mirrors that of an intuitive and well-experienced AI user who comprehends its skills and inadequacies.’
◦ Peer Learning Value: ‘The collaborative sharing of prompts and insights with the peers in my stream... has been invaluable in identifying effective strategies and areas for improvement.’
◦ Future Applications: ‘ARTX3500 has enhanced my understanding of artificial intelligence (AI) and its practical applications in both personal and academic contexts... suggesting it could be valuable as a mandatory module for all tertiary students.’
◦ Professional Development: ‘... growing need to teach students on how they can use AI effectively, while also emphasising it is a tool that should be used alongside their own work. AI is a great tool that is effective when used to boost a piece of work.’
Evidence 5b: Feedback from teachers around the world
When student work from this unit6 was promoted to academics and teachers online, responses included:
◦ ‘Wow this is a very useful resource’ Prof at Rutgers Uni and Macquarie alumnus
◦ ‘Say thanks to students who made this’ President of History Teachers’ Association of NSW
◦ ‘Interesting - and great for [HSC] Extension and use of AI technology too!’
◦ ‘This is absolutely thrilling: sound and good work, really the thing we need for questions that are often answered without any sense for nuance. Send my regards and congratulations to your students too.’ Professor Laes (Manchester)
Evidence 5c: Support from academics
Ray Laurence: ‘My involvement with ARTS3500 is leading to the following outcomes for my teaching and for my practice of research training for Graduate Research Students that will be enacted in 2025:
◦ AHIS1210/AHIX1210 Inclusion of instruction to students for use of LLMs to plan their second assessment. Include instruction in the use of Zotero – so that students are set up for the rest of their degree (ARTS3500 students had missed this and saw its value in hindsight)
◦ AHIS2225/AHIX2225 Instruction to students on use of LLMs to harvest, collate and catalogue ancient historical sources.
◦ Graduate Research Student Workshop/s Use of LLMs for scoping of research topics and formation of annotated bibliographies. Use of LLMs for the development of abstracts for conferences, thesis and publications, and Use of LLMs to plan publications in the context of Journal requirements.
This unit has provided me with knowledge around the effective (and useless) use of LLMs, as well as the confidence to inform students of their potential usages.’
Jodie Torrington: ‘Being involved in the experimental AI ARTS3500 class this semester has been extremely impactful on both my AI knowledge, understanding and efficacy. I have learned so much from Brian; importantly, changing my mindset around AI and the nuances involved in interacting with it. I have been able to translate the knowledge and understanding from Brian into the microcredentials I am developing for the MQ Teachers’ Learning Hub, and also for keynote speaking engagements including: the NSW Council of Deans, the MQ College, and the Sydney College of Divinity conferences. I have also created AI prompting insights, developed from Brian’s wisdom, that I have shared with the Macquarie School of Education staff.’
Evidence 6: Novel Knowledge Framework for Assessing Law Students with AI (Head, Willis)
Evidence 6a: Design Features
◦ Published open access on 19 July 2024. As of 30 October 2024, the paper has been viewed 656 times and is currently the 4th most viewed article in the journal for 2024.
◦ Adopted by MQ Law School for best practice in AI-integrated assessment
◦ Assessment design incorporating the Knowledge Framework: Torts research paper on Climate Litigation. Key design features:
▪ It does not prohibit the use of AI but rather provides resources (including a lecture by convenor of the unit) on how to ethically engage with AI in the research process (GenAI System Knowledge, GenAI Ethics Knowledge).
▪ It requires engagement with up-to-date, authoritative sources on the topic – behind a paywall and very recent, so less likely to be in LLM training data (Substantive Legal Knowledge).
▪ It requires students to reflect on their own research process, which has a strong pedagogical basis (see e.g. Sheppick, 20247), while also making it difficult to rely solely on AI-generated content, as the reflection acts as a check on the main submission.
▪ It promotes critical thinking about the role of AI in research and writing and awareness of AI’s strengths and limitations in legal research (GenAI Ethics Knowledge).
Evidence 6b: Feedback
◦ Student reflection: ‘I adopted a dual approach... I would conduct my own research and then used AI as a secondary aid, relying on it for clarification and refinement rather than as a primary source’
◦ Anonymous reviewer: ‘Law teachers everywhere would benefit from the article... With one eye to the professional admission requirements, the authors use a first-year legal ethics course to demonstrate both the strengths and weaknesses of GenAI in terms of student learning in relation to both their substantive knowledge of legal ethics and their understanding of the pros and cons of the technology. Law teachers would be likely to be encouraged not only by the positive experience of learning that students acquired, in regard to both the substantive knowledge of the course and the technology, but also the fact that there was a decline in the percentage of fail grades in the case study... This timely article has the potential to inform law teachers not only of the ethical risks of GenAI, but also its positive value as a teaching tool for prospective lawyers’
◦ Colleagues: ‘Would love to chat with you and/Amanda at some point later in the session ... about advice for possibly rejigging second media law assessment. We cover AI in the media landscape so it could be a nice tie in.’
Evidence 7: LLM Workshops, Department Seminars, Professional Development (Ballsun-Stanton)
◦ Impact on Teaching Practice: ‘Brian's fortnightly AI group has given me a much deeper understanding of how AI may be used in supporting my own practice as a learning designer... Based on what I have learnt in this group, I have run a workshop for my own team on using Whisper AI for transcription.’ (Karen Woo, FSE L&T)
◦ On a LLM intro workshop: ‘Thank you so much for your efforts in making yesterday’s training happen. I feel much more confident trialling use of AI now. I got a lot out of it. [Thanks for] these further resources as I will do more exploring.’
◦ Feedback from staff in the chancellery on a policy workshop: ‘Thanks for today’s session Brian, very informative.’ and, from a University Executive, ‘Thank you for your expertise, insights and frankness today Brian - that was an excellent session.’
◦ From the University of Tasmania: ‘We have had an overwhelming response to your events, which we have opened up to a wide range of disciplines (social sciences, medicine, engineering, education etc).’
◦ Impact on AI understanding: ‘he has changed the way I think about AI and how it works by correcting the widespread use of “hallucinates” when it comes to AI (“Only humans hallucinate. AI confabulates”). Now I am always silently correcting people who talk about AI “hallucinations” ’
◦ Feedback on an infographic created with Jodie Torrington: ‘…we’ve received lots of very positive feedback about how helpful it was, and it was a great launching pad for our ongoing conversations around AI.’ (Michael Cavanagh) and ‘It will be a great help when using AI and I’d like to share it with my students. ’ (Constantine R. Campbell, Sydney College of Divinity)
1 This application was edited with the help of Claude 3.5 Sonnet.
2 LaTeX Word Count, 950 words, excluding headers (73 words)
3 A/Prof Ulrike Garde is presently at the University of Weimar, included due to her leadership at the time of unit creation.
4 Applicants in alphabetical order.
5 Head, A., & Willis, S. (2024). Assessing law students in a GenAI world to create knowledgeable future lawyers. International Journal of the Legal Profession, 1–18. https://doi.org/10.1080/09695958.2024.2379785
6 Green, G., Rodgers, J., & Tainsh, C. (2024). Caligula’s Madness, an Annotated Bibliography: 1856–2024 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.13999404.
7 Sheppick, C. (2024). Unveiling the benefits of reflective learning in professional legal practice. International Journal of the Legal Profession, 31(2), 207–221. https://doi.org/10.1080/09695958.2024.2345924.
Awarded date: 4 Dec 2024
Degree of recognition: MQ Teaching Award