Registration
You can log in via the AcademicCloud portal, or directly via this link, using SSO (single sign-on).
Click on "Federated application" and select "TU Bergakademie Freiberg".
The Shibboleth login prompt follows. Enter your central login credentials here.
You can now chat with the generative AI.
Interface and operation
The interface is intuitively designed so that you can navigate easily and use the chatbot efficiently.
You can chat with the AI via typed messages, uploaded text files, or voice input.
To the right of the chat window you can choose between the different AI models:
- Meta Llama 3.1 8B Instruct
- InternVL2 8B
- Meta Llama 3.3 70B Instruct
- Meta Llama 3.1 70B Instruct
- Llama 3.1 Nemotron 70B Instruct
- Llama 3.1 SauerkrautLM 70B Instruct
- Mistral Large Instruct
- Qwen 2.5 72B Instruct
- Codestral 22B
- Teuken 7B Instruct Research
Below that you will find further options to customise your workflow, such as the system prompt or the temperature, which controls the creativity and randomness of the model.
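Chat AI also offers an OpenAI-compatible API alongside the web interface. As a minimal sketch of how the options above (system prompt, temperature, top_p) fit together in such a request, the following assembles a chat-completion payload. The model identifier string and the `build_chat_request` helper are illustrative assumptions, not official values:

```python
import json

def build_chat_request(user_message, system_prompt, temperature=0.5, top_p=0.5,
                       model="meta-llama-3.1-8b-instruct"):
    """Assemble an OpenAI-style chat-completion payload.

    Note: the model name above is a placeholder; check the service
    documentation for the exact identifiers it accepts.
    """
    return {
        "model": model,
        "messages": [
            # The system prompt steers the model's overall behaviour.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,  # higher = more creative/random output
        "top_p": top_p,              # nucleus-sampling cutoff
    }

payload = build_chat_request(
    "Summarise the lecture notes in three bullet points.",
    system_prompt="You are a concise academic assistant.",
    temperature=0.7,
    top_p=0.8,
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the service's chat-completions endpoint with your API key; only the settings shown here change how creative or deterministic the answers are.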
Available models and their areas of application
Source: https://docs.hpc.gwdg.de/services/chat-ai/models/index.html (as of 09.01.2025)
Model name | Developer | Open source | Knowledge cutoff | Context window | Strengths | Recommended settings |
---|---|---|---|---|---|---|
Llama 3.1 8B Instruct | Meta | yes | December 2023 | 128,000 tokens | Fastest, general use | Standard (Temp: 0.5, top_p: 0.5) |
Llama 3.1 70B Instruct | Meta | yes | December 2023 | 128,000 tokens | Great overall performance; multilingual reasoning and creative writing | Standard; Temp: 0.7, top_p: 0.8 |
Llama 3.1 SauerkrautLM 70B Instruct | VAGOsolutions x Meta | yes | December 2023 | 128,000 tokens | General use in German | Standard |
Llama 3.1 Nemotron 70B Instruct | NVIDIA x Meta | yes | December 2023 | 128,000 tokens | Overall improvements over Llama 3.1 70B | Standard |
Mistral Large Instruct | Mistral | yes | July 2024 | 128,000 tokens | Great overall performance; coding and multilingual reasoning | Standard |
Codestral 22B Instruct | Mistral | yes | Late 2021 | 33,000 tokens | Writing, editing, fixing and commenting code; exploratory programming | Temp: 0.2, top_p: 0.1; Temp: 0.6, top_p: 0.7 |
E5 Mistral 7B Instruct | intfloat x Mistral | yes | – | 4,096 tokens | Embeddings, API only | – |
Qwen 2.5 72B Instruct | Alibaba Cloud | yes | September 2024 | 128,000 tokens | Global affairs, Chinese, general use; mathematics and logic | Standard; Temp: 0.2, top_p: 0.1 |
Qwen 2 VL 72B Instruct | Alibaba Cloud | yes | June 2023 | 32,000 tokens | VLM; general use in Chinese | Standard |
InternVL2 8B | OpenGVLab | yes | September 2021 | 32,000 tokens | VLM, small and fast | Standard |
ChatGPT 3.5 | OpenAI | no | September 2021 | 16,000 tokens | Large general-purpose model | Standard |
ChatGPT 4 | OpenAI | no | September 2021 | 8,000 tokens | Large general-purpose model | Standard |
ChatGPT 4o-Mini | OpenAI | no | September 2021 | 128,000 tokens | VLM, low-cost | Standard |
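The table above pairs each strength with a suggested sampling configuration. As a small sketch of how you might encode that mapping for your own scripts, the lookup below transcribes a few rows; the task labels and the `pick_model` helper are illustrative, not part of the service:

```python
# Illustrative lookup transcribed from the model table above.
# Task labels are arbitrary; settings follow the "Recommended settings" column.
RECOMMENDED = {
    "fast general use":        ("Llama 3.1 8B Instruct",  {"temperature": 0.5, "top_p": 0.5}),
    "creative writing":        ("Llama 3.1 70B Instruct", {"temperature": 0.7, "top_p": 0.8}),
    "code repair":             ("Codestral 22B Instruct", {"temperature": 0.2, "top_p": 0.1}),
    "exploratory programming": ("Codestral 22B Instruct", {"temperature": 0.6, "top_p": 0.7}),
    "math and logic":          ("Qwen 2.5 72B Instruct",  {"temperature": 0.2, "top_p": 0.1}),
}

def pick_model(task):
    """Return (model name, sampling settings) for a task label."""
    return RECOMMENDED[task]

model, settings = pick_model("code repair")
print(model, settings)
```

Low temperature and top_p (as for code repair or mathematics) make answers more deterministic; the higher values suit open-ended creative work.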