The ability to use AI tools competently is already part of professional qualification. Knowing how the technology works helps not only in assessing what is possible, but also in routinely questioning and contextualising information critically. This page offers you an introduction to these aspects. The use of AI tools does not replace your own thinking, however; it is an aid, and there are rules for using it in studies and assessments.
The FAQs provided here are an excerpt from our guide "AI systems in studies: Tips on technology and law for use at LUH". They are neither comprehensive nor legally binding. In specific individual cases, a legal examination is recommended.
As of: September 2024
FAQ
Basics
What are AI tools?
The abbreviation AI stands for artificial intelligence and refers to automated processes and technologies that efficiently solve complex problems which, at least seemingly, require human intelligence. Technically, this is based on AI representing rule-based and statistical knowledge. This knowledge has either been coded by experts or trained from often enormous amounts of data.
The represented knowledge is used in automated processes to make decisions, calculate values, generate or modify data, and so on, based on input data. These processes are implemented in software technologies that humans can use as tools for their own input data.
Recently, AI tools that use so-called Large Language Models (LLMs) and other generative AI processes have become particularly widespread. They can solve tasks, create or optimise texts, analyse or evaluate existing texts, and generate images, videos and audio based on textual input (often called prompts). Well-known examples are ChatGPT, Microsoft Copilot, Google Gemini, GitHub Copilot, DALL-E, Stable Diffusion and Midjourney. New AI tools appear regularly, making a complete list difficult; what is more, existing AI tools are constantly being developed further.
In the following, we focus on text-generating AI tools. You can find more examples of these and of other areas of application for AI tools at https://www.tib.eu/en/learning-working/schreibbar.
How do large language models like ChatGPT work?
Large Language Models (LLMs) such as ChatGPT generate output text for an input text (the prompt). Unlike a search engine, an LLM does not look up answers or solutions on the internet, nor does it copy text from other sources; instead, it creates the text from scratch for each prompt. Today's LLMs are usually designed to interpret the prompt as a question or instruction and to respond accordingly.
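To make this concrete, the following minimal sketch prompts a language model that runs entirely on your own machine, with no internet lookup involved. It assumes the open-source Hugging Face transformers library and the small GPT-2 model purely as illustrative stand-ins; neither is named in this guide, and the models behind today's chat tools are far larger.

```python
# Minimal sketch: prompting a locally run language model.
# Assumes the Hugging Face "transformers" library and the small GPT-2
# model as illustrative stand-ins (not tools named in this guide).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model composes the continuation itself; nothing is retrieved
# from the internet or copied from a source document.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```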
How is the text generated?
LLMs like ChatGPT have learned independently, from billions of texts, which words fit best in a given context. Given a prompt, an LLM can calculate which word is most likely to come next, and it continues in this way word by word. The LLM takes into account not only the immediate context of the last few words added, but also the entire text history so far.
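The sketch below unpacks what happens inside the pipeline call above: at each step, the model assigns a score to every word in its vocabulary, and the chosen word is appended to the history before the next step. For simplicity it always picks the single most probable word (so-called greedy decoding), whereas real chat tools typically sample with some randomness; the library and model are again illustrative assumptions, not tools named in this guide.

```python
# Word-by-word (token-by-token) generation, made explicit.
# Illustrative assumptions: Hugging Face "transformers" and GPT-2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models generate text"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        # The model scores every vocabulary entry, given the ENTIRE
        # text history so far, not just the last word.
        logits = model(input_ids).logits
        next_id = logits[0, -1].argmax()  # greedy: take the most probable token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```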
Why do the generated texts often sound sensible?
While processing billions of texts, an LLM implicitly also memorises the knowledge contained in them and links pieces of knowledge with one another; implicitly, because the knowledge is stored independently of its specific wording. When a prompt is entered, all knowledge matching its content is automatically drawn upon. In addition, example dialogues were used during training to demonstrate what an answer to a prompt should look like.
How much can we trust the generated texts?
The information contained in the generated texts is often accurate, especially for common content, but you should never simply trust it: LLMs do not check the correctness of their own texts, and in particular they have no concept of ‘truth’. Because of the word-by-word generation, they may also invent information, which happens more often with more demanding content. In addition, they reflect all the representations and views of the world found in the processed texts, even where these are half-truths, prejudices or outdated world views. These distortions (bias) are not always easy to recognise.
Legal aspects
Unfortunately, we cannot offer a "catalogue" with yes/no answers to what is and is not legally permissible at this point. The answer to "What is possible, what is not?" can usually only be worked out on a case-by-case basis.
Our handout Text-generating AI: Legal aspects when using AI at LUH provides answers on general topics such as copyright and intellectual property. The following FAQs refer to the context of specific courses.
What should I ask my teachers?
In principle, your work must represent an independent achievement. If you want to use a tool, clarify a few questions with your teachers in advance:
- Is the use of AI tools allowed?
- Are certain tasks excluded?
- For which application scenarios are AI tools allowed?
- What applies when creating study and assessment work?
- Am I allowed to have texts generated and to adopt them into my work?
- If it is allowed: What (and how) should be documented?
What is useful to know?
- Am I entitled to guidelines on how the use of AI can be disclosed?
There is no obligation on the part of the teachers to create information sheets or guidelines. Ask about the dos and don'ts at the beginning.
- Am I entitled to training?
There is no entitlement, but there are already good offers such as the courses offered by the TIB (https://www.tib.eu/en/learning-working/schreibbar) and the ZQS/Key Competencies (https://www.zqs.uni-hannover.de/en/kc).
- How may my work be handled?
Your work is protected by copyright: it may not be uploaded to an AI tool or used by third parties, even in part, without your consent. Your texts must also be assessed individually.
- How can I use AI tools?
At LUH, LUHKI and the Academic Cloud are available for this purpose. You cannot be required to use a tool that obliges you to disclose personal data.
What do I need to be aware of?
As the author, you are responsible both for your use of AI tools and for the work you submit. Do not enter third-party content or data for further processing without authorisation. Pay attention to the tools' terms of use, as these may change and may differ depending on the licence. For example, if you are eligible to register via the Academic Cloud, you will receive information on which AI tools comply with the General Data Protection Regulation (GDPR).
The handout Text-generating AI: Legal aspects when using AI at LUH contains a lot of important information on what to be aware of when using text-generating AI tools. A special point is unintentional plagiarism: an AI tool may return a text that closely resembles a protected text.
A final thought on electricity and water consumption: the IT infrastructure and computing requirements of AI tools are significantly higher than those of pure internet search engines. Develop a critical awareness of when to use which tool for your tasks.
What is not acceptable?
The Teaching Constitution emphasises that students develop their personalities and, equally, that they work in a scientifically sound manner. This follows a code of transparent research and publication according to agreed rules. Practising this is the aim of many courses and assessments. Those who do not adhere to it damage their own reputation.
Three NOs:
- Do not hide if and how you use AI tools. Find out whether the use of AI tools is allowed and document what you do with which tool.
- Do not violate the rights of others, e.g. by entering third-party data without authorisation. This also includes the unauthorised upload of lecture material, etc., if this is used by the provider for further training of the AI.
- Do not use AI tools to create forgeries (plagiarism) or to harm others (deepfakes).
Do these guidelines apply to everyone?
Depending on the discipline or subject area, these guidelines may be extended or restricted. Our advice, therefore: discuss technologies with your teachers before using them for assignments or assessments!
Authors
Who compiled the guidelines?
Compiled by the AI in Teaching working group with the participation of Melanie Bartell (Dez. 2 / SG23), Sylvia Feil (ZQS/elsa), Kati Koch (TIB), Jens Krey (Dez. 1 / SG11), Prof. Dr. Marius Lindauer (Institute of Artificial Intelligence), Dr. Katja Politt (German Department), Dr. Inske Preißler (Faculty of Electrical Engineering and Computer Science), Dr. Klaus Schwienhorst (Leibniz Language Center), Felix Schroeder (ZQS/elsa), Prof. Dr. Henning Wachsmuth (Institute of Artificial Intelligence).