Institut für Kognitionswissenschaft
Institute of Cognitive Science
Osnabrück University

News

News

19.12.2024
Chaos Communication Congress 38c3: We Are Participating with Two Presentations!

See you in Hamburg! We will be presenting the latest from our research in these two talks (in German):

  • 28.12.2024, 12:00 PM: Gemeinwohlorientierte Forschung mit KI: Missbrauch eindämmen durch Zweckbindung für KI-Modelle (“Public-Interest Research with AI: Curbing Misuse through Purpose Limitation for AI Models”; Rainer Mühlhoff and Hannah Ruschemeier).
  • 29.12.2024, 11:00 AM: Chatbots im Schulunterricht!? Was können die Tools wirklich, was machen sie mit der "Bildung", und sollten wir dafür Steuergelder ausgeben? (“Chatbots in the Classroom!? What Can These Tools Really Do, What Do They Do to ‘Education’, and Should We Spend Tax Money on Them?”; Marte Henningsen and Rainer Mühlhoff).

All presentations will be live-streamed and recorded.

12.12.2024
Study of the AI-powered grading tool “AI Grading Assistant” by the German company Fobizz (in German)

This study examines the AI-powered grading tool “AI Grading Assistant” by the German company Fobizz, which is designed to support teachers in evaluating and giving feedback on student assignments. Against the backdrop of an overburdened education system and rising expectations that artificial intelligence can solve these challenges, the investigation evaluates the tool's functional suitability in two test series. The results reveal significant shortcomings: the tool's numerical grades and qualitative feedback are often arbitrary and do not improve even when its own suggestions are incorporated. The highest ratings are achievable only with texts generated by ChatGPT. False claims and nonsensical submissions frequently go undetected, while the implementation of some grading criteria is unreliable and opaque. Since these deficiencies stem from the inherent limitations of large language models (LLMs), fundamental improvements to this or similar tools are not foreseeable in the near term. The study criticizes the broader trend of adopting AI as a quick fix for systemic problems in education and concludes that Fobizz's marketing of the tool as an objective and time-saving solution is misleading and irresponsible. Finally, it calls for systematic evaluation and subject-specific pedagogical scrutiny of AI tools in educational contexts.

01.10.2024
Symposium “AI x Culture” in Hanover, October 1

Next week, on October 1, our colleague Annemarie Witschas will lead a workshop as part of the “KI X Kultur” event in Hanover. We are looking forward to exciting discussions with representatives of art education and the cultural scene about how to promote a more comprehensive, and thus more responsible, image of AI in public discourse.

For more information, please follow the link in the photo (in German).
24.08.2024
Another tech is possible
Critical and speculative digital futures

From flying cars to virtual cyber worlds: A future without smart digital technologies seems almost unimaginable today. But whose future is this, really? While technological futures often present themselves as revolutionary, they simultaneously reinforce old notions, existing power structures, and ways of life. Where do these visions of the future come from, and whose intentions are behind them?

These questions will be discussed critically, imaginatively, and playfully in a workshop on August 24, 2024, from 11:00 AM to 5:00 PM at the Kunsthalle. Together, we will explore what alternative, sustainable, and socially just technological futures could look like, drawing on methods from critical future studies and design futuring as well as feminist science fiction.

The workshop will be held in German. There will be breaks with snacks and drinks. Participation is free of charge. Please register in advance by emailing outreachlab@ethikderki.de.

30.07.2024
Chatting with artificial intelligence - risks when dealing with chatbots

© Reinhardt Hardtke / HNF: Nora Lindemann talking about the human relationship to AI

What are the problems, risks, and opportunities when people interact with chatbots? What do such relationships look like, and which ethical and social issues do they raise? These are the questions Nora Lindemann addresses in her doctoral thesis, and they are highly relevant in the age of ChatGPT and similar systems. The University of Osnabrück has now published a press release on this work, which offers a good first insight into the topic (in German only).

April 22, 2024
New article in AI & Society: "Chatbots, Search Engines, and the Sealing of Knowledges"

Abstract: In 2023, online search engine provider Microsoft integrated a language model that provides direct answers to search queries into its search engine Bing. Shortly afterwards, Google introduced a similar feature to its search engine with the launch of Google Gemini. This introduction of direct answers to search queries signals a significant change in online search. This article explores the implications of this new search paradigm. Drawing on Donna Haraway’s theory of Situated Knowledges and Rainer Mühlhoff’s concept of Sealed Surfaces, I introduce the term Sealed Knowledges to draw attention to the increasingly difficult access to the plurality of potential answers to search queries through the output of a singular, authoritative, and plausible text paragraph. I argue that the integration of language models for the provision of direct answers into search engines is based on a de-situated and disembodied understanding of knowledge and affects the subjectivities of its users. At the same time, the sealing of knowledges can lead to an increasing spread of misinformation and may make marginalized knowledge increasingly difficult to find. The paper concludes with an outlook on how to resist the increasing sealing of knowledges in online search.

Mar 15, 2024
Purpose Limitation for AI Models

We are excited to announce the publication of our paper, “Regulating AI with Purpose Limitation for Models” by Rainer Mühlhoff and Hannah Ruschemeier, featured in the opening issue of the new journal “AI Law and Regulation”. In this groundbreaking study, we introduce the concept of applying purpose limitation to AI models as a novel approach to mitigate the unregulated secondary use of AI, addressing risks such as discrimination and infringement of fundamental rights.

Our interdisciplinary research highlights the increasing informational power asymmetry between data companies and society, suggesting that current regulations, focused mainly on data protection, fall short in curbing the misuse of trained models. By shifting the focus from training data to trained models, we advocate for a regulatory framework that emphasizes democratic control over AI’s predictive and generative capabilities, ensuring they are used in ways that are beneficial to society without undermining individual or collective rights.

This paper is a call to action for lawmakers, technologists, and the public to rethink how we regulate AI, aiming for a future where AI serves the public good while respecting privacy and equity. Dive into our full analysis and join the conversation on how we can achieve a more equitable and controlled use of AI technologies.

An earlier version of the paper was presented at the EAI CAIP – AI for People conference on November 24, 2023, in Bologna.

Feb 19, 2024
Predictive Analytics and Collective Data Protection

We’re excited to announce the final publication of our paper, “Predictive Analytics and the Collective Dimensions of Data Protection”, by Rainer Mühlhoff and Hannah Ruschemeier. Our interdisciplinary research blends legal studies, ethics, and technical insights to unravel the social implications of predictive analytics. Challenging the conventional, individualistic approach, we propose ‘predictive privacy,’ advocating for regulations that reflect the collective impact of data use. Dive into our exploration of societal risks and the imperative for new legal frameworks in the digital age.

Feb 5, 2024
NDR Kultur – Das Journal on our work

Under the title “Was sind die Risiken von KI?” (“What are the risks of AI?”), NDR Kultur – Das Journal produced a report on the current work of Prof. Mühlhoff. Many thanks to Lennart Herberhold (in German).

More interesting articles:

Dec 28, 2023 - Chaos Communication Congress Hamburg - video AI – Power – Inequality (in German)

Oct 17, 2023 - Public Philosophy journal - An Agenda For Change

April 17, 2023 - Big Data & Society (open access) - “Predictive Privacy: Collective Data Protection in the Context of AI and Big Data”

March 22, 2023 - “Social Media Advertising for Clinical Studies: Ethical and Data Protection Implications of Online Targeting” (with Theresa Willem)

Oct 12, 2022 - Prädiktive Privatheit: Kollektiver Datenschutz im Kontext von Big Data und KI (“Predictive Privacy: Collective Data Protection in the Context of Big Data and AI”; in German)

Sept 8, 2022 - Deutschlandfunk Nova – Broadcast Lecture Hall (in German)

Aug 29, 2022 - Live on radio WDR5 – The Philosophical Radio (in German)