Creating OER with the help of “AI”

The level of acceptance of “AI” – by which, in line with common usage, I mainly mean chat-based Large Language Models (LLMs) such as ChatGPT, Perplexity, Gemini or DeepL – is certainly mixed in my personal circle.
The fascination with what the large “AI” models in particular are capable of is counterbalanced by a healthy scepticism regarding errors (“hallucinations”) and the lack of reproducibility of results. Added to this, there are ethical and environmental concerns.

And, of course, one has to face the legitimate question: if you can’t be bothered to write it yourself, why should I bother to read it?
The fact is, however, that the technology is here and it’s unlikely to go away. It does not befit a person working at a Data Competency Centre to ignore or boycott the latest technological developments on principle. I’ll therefore take the opportunity to explain how and why I’ve used ‘AI’ as a tool in the creation of our self-study courses.

(As always, I can only speak for myself here – it is quite possible that my colleagues have weighed up the options, made different decisions, and have taken a different approach.)

[Image: A toy brick minifigure of our persona ‘Winnie’ stands beneath a sign featuring a magnifying glass and a question mark. Next to it are the logos of popular AI tools: DeepL, ChatGPT, DuckAI, Perplexity, Gemini and Midjourney. The background is WiNoDa green.]

Firstly…

Of the freely available LLM chatbots, I tried out the following models: ChatGPT, Perplexity and Duck.ai. In the vast majority of cases, I used Perplexity because it is the most transparent about the sources from which it derives its answers. In some cases, however, I also pitted two models against each other – either by comparing the results of the same prompt, or by submitting one model’s response to another for feedback. This served primarily to compare the strengths and weaknesses of the models and to develop a sense of the results to be expected.
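Purely as a sketch, the two variants might look like this in code – with a hypothetical ask() helper standing in for pasting a prompt into a chat window; there is no real API behind it, and the example prompt is invented:

```python
# A minimal sketch of playing two models off against each other.
# ask() is a hypothetical stand-in for pasting a prompt into a chat
# window and copying the reply back; it is not a real API.

def ask(model: str, prompt: str) -> str:
    """Placeholder: pretend this sends `prompt` to `model`."""
    return f"[{model}'s reply to: {prompt[:40]}...]"

prompt = "Explain the FAIR principles in three sentences for beginners."

# Variant 1: same prompt to two models, compare the answers side by side
answer_a = ask("Perplexity", prompt)
answer_b = ask("ChatGPT", prompt)
print(answer_a, answer_b, sep="\n---\n")

# Variant 2: submit one model's answer to the other for feedback
critique = ask("ChatGPT", f"Please review this answer for errors and gaps:\n{answer_a}")
print(critique)
```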

Mareike König has written a helpful tutorial (in German or French) on the use of LLMs in the digital humanities (including a comprehensive overview of the pros and cons and some useful tips on ‘prompt engineering’), which was easily adaptable to my intended use.
Furthermore, we have agreed internally on the purposes for which we wish to use “AI” and how we wish to label its use in our working materials. With the help of the University of Bamberg’s AI Policy Generator, we have settled on the following wording:

“We may use AI tools to assist with some or all of the following tasks as needed:

  • Planning course structure and content.
  • Creation and revision of teaching materials (e.g. slides, summaries, visualisations).
  • Generation of exercises, quizzes or case studies.
  • Optimisation of tasks or descriptions.
  • Translations.

All content created or edited in this way is carefully checked and selected by us.”

I’ll just go through the points one by one:

1. Planning course structure and content

As a rule, everyone involved in creating a course worked together to develop the topics for the course modules and underpin them with the relevant learning objectives. By that point, I already had a fairly clear idea of the content and structure.
I used the LLMs merely as a thought-provoking sparring partner – is the planned structure appropriate for the target audience? Does it cater to the right level of complexity or difficulty? Is anything missing? It was helpful when working alone from home – but this role could just as easily have been taken on by colleagues.
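A typical “sparring” prompt of this kind looked roughly like the following sketch; the course topic and module list are invented for illustration:

```python
# Rough shape of my course-planning prompts.
# The topic and module list below are invented placeholders.
prompt = """
I am planning a self-study course on research data management
for beginners with no prior technical knowledge.
Planned modules: 1) What is research data? 2) Documentation,
3) Storage and backup, 4) Publishing and licensing.

Is this structure appropriate for the target audience?
Is the level of complexity right? Is anything missing?
"""
print(prompt)
```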

2. Creation and revision of teaching materials (e.g. slides, summaries, visualisations)

First things first: I have a personal aversion to ‘AI’-generated images. I don’t like them; I find them mostly uncreative, banal, ugly (and yes, of course, sometimes flawed too). So far, I have had just one image generated for a blog post, and I’m still ashamed of it – as soon as I find the time, I’ll replace it.
Therefore: I haven’t used any “AI”-generated illustrations myself.

I also created the design and layout of the slides myself – for me, creating slides is part of the revision process, and I don’t delegate that. Plus: as mentioned above, ‘AI’-generated slides aren’t any better looking than the ones I create myself; they’re just a different kind of ugly. Since I’m ultimately responsible in any case, I’d rather take the blame for my own mistakes than for someone else’s.

The situation is different with ‘AI’-generated summaries. They work best, however, when the source text follows a specific structure shaped by American writing conventions – and that isn’t necessarily ideal for German-language texts.
In any case, the generated summary must be thoroughly checked and often revised several times – this can be helpful, but it can also end up being more work than just writing the summary yourself. I decided on a case-by-case basis.

The same applies to the creation of teaching materials – in this case, subject-specific content scripts.
I know people who find themselves paralysed by a blank sheet of paper – and who ask the LLM to provide a first draft. These people find it easier to correct or edit a text than to start from scratch. So anyone who prefers revising to writing might want to proceed in this way. However, I find that editing texts exercises a completely different ‘writing muscle’ from writing them.

Personally, I’d rather write the text myself than give feedback to an ‘AI’ – especially since the LLM doesn’t ‘learn’ anything from my ‘feedback’, so my effort wouldn’t be an investment that pays off in the long run.
What’s more, by the time I’d have created a prompt that took into account the necessary context, the desired level of technical depth, and the connection to preceding or subsequent course elements, I would have already written the structure of the text myself and would “only” need to flesh it out.

I would also have completed the basic research on the subject matter long before this point, as a solid grounding in the topic is required to evaluate texts generated by ‘AI’ and check them for errors, omissions and ‘hallucinations’ – and yes, a thorough check is essential in every case. So I haven’t used ChatGPT and the like for creating teaching materials either – not in the sense of: “here’s a learning objective, write me a text”.

BUT: I found chatbots very helpful as a tool for reviewing phrasing, text flow or gaps. I regularly asked LLMs for feedback on the structure of a text or on a first (or second) draft of a script or section of text.
(When it comes to the professional revision of content, however, I still tend to rely on peer review by a human colleague – I cannot and will not do without that.)

My revision prompts included information on the context (a self-study course on topic X), the content (the learning objective as per the learning objective matrix, including Bloom’s taxonomy level) and, of course, the draft text.
In addition, I asked the chatbot to evaluate the draft in a structured manner – for example, in terms of appropriateness, completeness/gaps, or with regard to linguistic flow or structure. If the chatbot’s feedback required significant changes, I also asked for suggestions for improvement.
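In sketch form, such a revision prompt came down to something like this (the topic, learning objective and draft below are invented placeholders, not actual course content):

```python
# Shape of my revision prompts; all specifics below are invented.
draft = "... the draft script or section of text goes here ..."

prompt = f"""
Context: a self-study course on research data management
(self-paced, no prior knowledge assumed).
Learning objective (Bloom level: Understand): learners can explain
the FAIR principles in their own words.

Please evaluate the draft below in a structured way:
1. Appropriateness for the target audience and Bloom level
2. Completeness - are there gaps?
3. Linguistic flow and structure
If major changes are needed, please suggest concrete improvements.

Draft:
{draft}
"""
print(prompt)
```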

3. Generation of exercises, quizzes or case studies

This is probably my most important use case.
For a lot of the content, I asked for practical examples based on our personas, for instance. And I had quiz questions created to match each learning objective script. Writing quiz questions and their correct answers is easy, especially for single- or multiple-choice questions – but the wrong answers you need as distractors are tedious to come up with, and this is where ‘AI’ really shines.

Another handy feature is that you can have the quiz questions output immediately in ‘H5P code’ format, so you can copy them straight into the H5P editor. (The Medienberatung Niedersachsen shows how this is done in a series of video tutorials.) This saves a lot of time and effort.
Nevertheless, it is of course always necessary to check carefully whether the practical examples are realistic, whether the quiz questions can actually be answered based on the text, and whether the solutions are truly correct.
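As an illustration: a sketch of turning generated quiz items into the textual input format that H5P’s Single Choice Set accepts – as I understand it, question on the first line, the correct answer next, distractors after, and a blank line between questions (do check this against your H5P version). The quiz item itself is invented:

```python
# Sketch: format generated quiz items for H5P's Single Choice Set
# textual input. The question and answers are invented examples;
# verify the expected format in your own H5P editor.

questions = [
    {
        "question": "Which licence requires reusers to credit the author?",
        "correct": "CC BY 4.0",
        "distractors": ["CC0 1.0", "All rights reserved", "The public domain mark"],
    },
]

blocks = []
for q in questions:
    # first line: question; second line: correct answer; then distractors
    blocks.append("\n".join([q["question"], q["correct"], *q["distractors"]]))

# blank line between questions; paste the output into the H5P text field
print("\n\n".join(blocks))
```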

4. Optimisation of tasks or descriptions

This largely overlaps with the feedback described above: text flow, suitability for the intended level of detail, and the clarity and conciseness of tasks and descriptions were my main use cases.

5. Translations

I used DeepL for all the translations. Depending on whether I had written my script in German and then translated it, or had written it in English from the outset, I either had whole paragraphs translated or simply looked up individual phrases.
But here too, of course, I didn’t simply accept the output at face value; instead, I checked it, occasionally revised it, or chose a different phrasing to the one initially suggested. This is particularly important when it comes to technical terminology.
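Not how I actually worked (I used the web interface), but for the record: the same step could be scripted with DeepL’s official Python package and an API key, roughly like this:

```python
# Minimal sketch using DeepL's official Python package ("deepl").
# Requires an API key; the sentence below is an invented example.
import deepl

translator = deepl.Translator("YOUR_AUTH_KEY")  # placeholder key

result = translator.translate_text(
    "Forschungsdaten sollten nachvollziehbar dokumentiert werden.",
    source_lang="DE",
    target_lang="EN-GB",
)
print(result.text)  # still needs a human check, especially for terminology
```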

Summary

Chat-based Large Language Models (LLMs) are not a reliable research tool. The quality of the results is still not consistent enough for that – sources may be misjudged or insufficiently contextualised, use cases may be ‘hallucinated’ and references may be ‘fabricated’. Any output must always be carefully checked.
However, they can be a helpful aid for revising work or for overcoming writer’s block. They excel at generating quiz questions that require not just one correct answer but several incorrect ones as well.
And they are a great help when drafting texts in a foreign language, particularly when these are not required to adhere to the conventions of academic German (which is both hard and inadvisable to translate).

Concerns

Skills atrophy – the loss of skills due to the use of ‘AI’ – is real.
I’ve noticed this in myself: whereas 10 or 20 years ago I could hold technical discussions fluently in English, I now increasingly rely on online translation tools. My passive language skills are still there – I can still understand everything, check it, and choose the better translation. But my active language skills are atrophying – I simply no longer spend time in English-speaking countries or meet native English speakers. This phenomenon has long been recognised, particularly when it comes to languages.

The atrophy of underused skills is therefore nothing new – but with the widespread emergence of ‘AI’ chatbots, the concept is increasingly being applied to other cognitive areas: those who increasingly rely on ‘AI’ to generate texts, code or concepts can, of course, still evaluate and adapt the results. But the cognitive task of revision differs from that of creation. It is important to be aware of this.
Also, what is quicker in the short term may prove more time-consuming in the long run – having to carefully correct everything can take longer than simply doing it ‘right’ yourself from the outset.

Furthermore, we shouldn’t assume that using these models will remain so cheap – making our workflows dependent on free access to chatbots could soon prove costly for us financially. And from an environmental perspective, the way data centres are undermining the energy transition is disastrous in any case.
As for the ethical objections: these are numerous, well-founded and have long been known. Nevertheless, I use LLMs as described here. Unfortunately, my moral backbone is apparently more flexible than I thought. I’ll probably have to revisit this issue.

Overall, I have very mixed feelings about using ‘AI’ tools. I recognise the huge potential and can see specific, useful applications, but I’m wary of the hype and the promise of ‘simple’ solutions. I don’t believe in the business model and I’m appalled by the ethical and environmental consequences. I don’t want to delegate either thinking or teaching (or learning) to a machine, and I’m glad that I would still be able to do my work without “AI” support. Even if it might take a little longer in some places.
Yet, I’m curious to see how the field develops.

- AI Policy Generator of the University of Bamberg: https://web.psi.uni-bamberg.de/ki-policy-generator/v2.html
- Video tutorial playlist by the Medienberatung Niedersachsen (Lower Saxony Media Advisory Service): https://youtube.com/playlist?list=PLwKiLhXWbZzSDjqmvLj0t_Dnauk11bPlB&si=jY4mgZXl0V-NtDbH
- Tutorial by Mareike König on the use of LLMs in the digital humanities: https://dhdhi.hypotheses.org/9197
- One of many articles on skill atrophy: https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them

Unless otherwise stated, all content is published under cc-by 4.0. Suggested citation:
Schröder, Asta von. (2026). Creating OER with the help of “AI”. WiNoDa Knowledge Lab. https://winoda.de/en/2026/05/08/creating-oer-with-the-help-of-ai/ (Accessed on May 8, 2026 at 17:12)