How can a Cognitive Science Degree shape your Future? Stories about career prospects from successful alumni of the Institute.
Nora Lindemann is a research assistant and PhD student in the Ethics of AI research group at the Institute of Cognitive Science, University of Osnabrück. With a strong interdisciplinary background, holding an M.Sc. in Cognitive Science and a B.A. in Liberal Arts and Sciences, she brings expertise in philosophy, ethics, gender studies, and cultural studies to her work. Nora’s research focuses on the ethical implications of algorithmic decision-making and AI systems, particularly chatbots. She critically examines the power dynamics embedded in these technologies, exploring their structural impact on society to develop a power-aware, critical ethics of chatbots and other AI systems. Additionally, she investigates broader questions surrounding the potential for AI to be democratic and fair.
Thank you very much for being here today. The interview will be divided into a personal and a scientific part. We would like to start with the personal part. Can you please introduce yourself and tell us what you are currently doing?
Yes, of course. First, thank you for inviting me. It’s a pleasure. My name is Nora Lindemann and I’m a PhD student and research assistant in the Ethics and Critical Theories of AI research group at the IKW. In my PhD thesis I work on the ethical implications of chatbots and large language models.
When did you start your PhD here?
I started my PhD in autumn 2021, so it’s already been two and a half years. […] crazy!
Haha yeah, time flies. We would like to know what studies and work led you to your current position. How come you started your PhD here in Osnabrück?
I did my bachelor’s degree at the University of Freiburg. In my Bachelor of Arts I focused on philosophy, gender studies, and a bit of cultural studies. The degree is called “Liberal Arts and Sciences”, which is a very broad and interdisciplinary bachelor. Nearing the end of it, […] I got quite interested in neuroscience, specifically in gender/feminist neuroscience – research that deals with questions like “What are the differences between male and female brains?” and “[…] How do we conceptualize male and female brain differences?”.
That led to my decision to do a CogSci Master here in Osnabrück to focus more on neuroscience without leaving my interest in philosophy behind. Being here, […] I got in touch with the topic of AI. I did a study project on data ethics, which was quite interesting and convinced me of the topic’s relevance and urgency. Currently there’s so much advancement in AI, and it is important to constantly question what we want to do with these powerful systems.
Would you say that you encountered a research gap in your critical investigation of AI?
Yes. I left the data ethics study project with the feeling that this is a pressing topic and an important research field that I want to work in. I then wrote my master’s thesis about the ethical implications of ‘Deathbots’. Deathbots are AI chatbots which imitate the reading, writing, and speaking behavior of one specific person who is deceased. Back then, I realized that there were very, very few published articles investigating what this does to the bereaved from an ethical point of view. There was some work published on questions like “How do deathbots impact the dignity of the deceased person they are mimicking?”. But there was little on what it does to the people who are still living and engaging with these bots. Here, I think, I did find a worthwhile research gap. Additionally, my master’s thesis led me to question AI chatbots on a broader scale. It has been only two and a half years, but so much has happened since then. With ChatGPT, a huge hype broke loose!
Were you planning to go into research and academia from the beginning?
I wouldn’t say that I planned to go into academia when I started my bachelor’s. I was focused on finding out what I am interested in, though. I asked myself, “Where do I feel that I can really learn something?” Looking back, thinking about and engaging with topics that I find interesting and important led to my career ending up the way it is now. I enjoy both researching and teaching a lot. Also, I did most of my master’s during COVID, so when I finished it, I felt like I had missed out on much of university life. This combination of factors gave me the idea of potentially doing a PhD. And when the opportunity came up to do it here in Osnabrück, I couldn’t say no.
Since we are already looking back in time: Is there something that you would do differently in your studies if you could start again? And conversely, what about your studies did you get exactly right?
Hmm, I find this to be a tricky question because I don’t know where I would be with different choices. Looking back at it, I think I could have taken some more time for my master’s degree – done it in a less focused manner. After COVID hit, I basically just stayed home for one and a half years, working through the credits needed to finish the master’s. In that situation I would have advised myself to look outside my main area of interest. I could have taken more classes from other departments of the university or done some more activities outside of it. For example, during my bachelor’s I did more voluntary work and engaged more with student initiatives than during my master’s.
Regarding your master’s, what were your two majors? And related to that, what are the essential skills that you acquired during your studies in cognitive science that turned out to be helpful during your current job?
I did philosophy and neuroscience, and then I did my study project in data ethics. What I learned during my studies is the value of interdisciplinary work. As a CogSci master’s student, you sit in classrooms with people who have very different specializations and very different bachelor’s degrees. I came from this broad humanities background and now know people doing informatics or psychology. It was really interesting to see how we approached topics differently because of the prior knowledge we had. I think that is something that I took away: the value of interdisciplinary work.
If you could conduct any research project in cognitive science with unlimited resources what would it be and why?
I think that’s a really good question and also a very hard one. In the end, I am a philosopher and an ethicist. At the moment there’s just so much happening on the good and the bad [sides] of digitalization, and the gray area between them, that’s worthy of investigation. So I’d try to answer the question “How can society develop an adequately critical attitude towards certain newer technologies?”. I know that is a very vague answer, sorry! Other worthwhile topics for me would be fake news and how to counter it, and rising social media addiction.
Is Ethics of AI supported by the government or other institutions? Do you have a lot of funding and are there full-time PhD positions?
Never ask a PhD student about their salary. No, there are full-time PhD positions, but they’re extremely rare, at least in Germany. If you looked across Germany, you would find fewer than 10 PhD students with full-time positions in the ethics of AI. Most people have 50, 65, or 75% positions while basically doing a full-time job, because that is what’s expected in the end. So yeah, the salary isn’t great when you’re doing a PhD. For me it’s more about doing something that I am really interested in. When it comes to career paths, academia is usually [a difficult place to be in]. There are certainly disciplines which have more secure careers. For example, in computer science people always have a very good plan B. If you leave academia as a computer scientist and enter the industry, you are usually paid well. And therefore, it is easier to get positions within academia because most people [go straight into] the industry. But in philosophy it’s a different story. Additionally, there are way more people doing a PhD than people who will in the end get a professorship, because there are fewer professorships than PhD positions. Therefore, it’s highly competitive.
What challenges might non-male genders face when pursuing a career in science and academia?
It’s always a little discipline-specific. But there are certain difficulties along the way. One of them is socialization and the self-confidence [that comes with it]. Do you trust yourself to do a PhD? Do you even apply for it? How confidently do you present yourself at conferences or to your supervisor? All those things are important for your career. I notice that when it comes to going to conferences. I tend to look younger than I am, and with that, I tend not to be taken as seriously as maybe my male colleague who is also doing a PhD. You also get a lot of “mansplaining”. Unfortunately, that is also a thing in academia. I was actually talking with a female professor a while ago, and she said that she still gets asked sometimes who her supervisor is. However, she’s a full professor and is herself supervising several PhD students. She probably wouldn’t have been asked that if she were male. How people judge you changes with your perceived gender.
Another important factor would be having children. When you finish your PhD, you are usually in the beginning or middle of your 30s in Germany. And that is when your critical career phase starts, because then you take on postdoc positions. These postdoc positions then qualify you for a professorship. But the problem is that those are the years when you will have to take time off if you want to have children. And that is the time when people who bear children often drop out of academia. Because if you have children, you want to have some stability and some job security. However, during your postdoc phase it’s quite normal that you only get short-term contracts of one to three years. This means that you also have to move every one to three years, which is rather difficult, especially if you are raising children at that time and have to do care work.
Okay, this seems like a big factor contributing to the underrepresentation of women in academia. Is there anything else, maybe some advice or concluding thought that you want to share before we get into the scientific part of the interview?
I’d say don’t let yourself get discouraged from what you are enthusiastic about. Trust in yourself and in the thing you love doing. But also feel free to spread your interests and attention, and experiment if you feel like it. Oh, and also don’t do a PhD just because you somehow feel you need to do one. I think you would end up having a horrible time.
Okay great, thanks for all that! Now let’s get to the scientific part of today’s interview. We would like to discuss your paper “Chatbots, search engines and the sealing of knowledges”, which you published in April 2024. What led you to investigate this particular topic? Was there a specific moment or personal experience that convinced you of its importance?
For this paper – in which I focused on the use of large language models in search engines to give direct answers to search queries – there was indeed a specific moment when I started to think about writing it, namely in the beginning of 2023. A few months after the whole ChatGPT hype broke loose, Microsoft announced that it would integrate the GPT-4 model into its search engine Bing. And when I read that, I was puzzled. I already knew that people increasingly use ChatGPT like a search engine. But to explicitly market large language models as being good for answering search queries went even one step further. As part of my PhD, I engage a lot with language technologies like large language models, and therefore I had a sensitivity to this topic anyway. So, when I read about GPT-4 being integrated into the Bing search, I sensed that this was a significant thing to write about. I started the paper by thinking about the differences between this new type of search and traditional online search. I quickly saw that the answers one gets from the search engine differ a lot. After that I thought about whether or not this could be problematic and, if so, in which ways. And that’s when I started to draft the paper. This topic is important and timely, because it’s happening right now, and it is changing the knowledge infrastructure within society.
Okay, thanks for answering this. We assume that our readers are familiar with chatbots and search engines but probably not with the term “sealing of knowledges”. Could you explain what you mean by that?
The sealing of knowledges is a term that I introduced to explain what is happening through the integration of chatbots and large language models into search engines. One part of the term addresses the sealing aspect and the other one the knowledge aspect. I’m referring to a concept called “sealed surfaces”, which Rainer Mühlhoff introduced to describe, for example, how the MS Windows folder structure changed over the years. In older Windows versions you could easily get to deeper levels of the system, whereas in newer versions these are more and more “sealed” away from the user. This way it’s more and more difficult to see how the system actually functions (though you can still access it if you put effort into understanding how the system works).
The metaphor of the seal that is difficult to get through is something that I took to the realm of search engines, where chatbots function as a sort of seal, giving you only one answer to your question. Using Bing, you get only one very authoritative-sounding, plausible answer of a few sentences. Most of the time there are, of course, thousands of possible answers to a search query. Life is complex. We are complex, and with only this one-answer output, this whole complexity is sealed.
The other part of “sealed knowledges” is “knowledges”, which references feminist epistemologies and the author Donna Haraway. In the 1980s, Haraway wrote about situated knowledges. Situated knowledges is the idea that knowledge is situated within the person who makes knowledge claims and speaks. If I say something, I say it from the perspective I have. My knowledge therefore is always embodied. It is within my body and through my way of engaging with the outside that I can find certain knowledge. How I see the world, how I interpret certain information, how I tell you something is dependent on who I am and on what experiences I had and have. This knowledge, therefore, is always partial. There’s no one true knowledge, no such thing as general objective knowledge. Even what we question in science depends on who asks certain questions, who receives funding, and who is listened to. So, in this understanding, knowledge is always situated, embodied, and partial.
This is something you don’t see when large language models provide direct answers to queries. You get one answer, assembled as a single output by a language model, which gathers information from many different sources and presents it with an aura of omniscience, as if it knows everything, and of detachment and impartiality. This is quite dangerous because it makes it easier to forget that knowledge is not that way—knowledge is always partial and situated.
Therefore, I use the term ‘sealed knowledges’ to illustrate this dualism or the two-way change in our knowledge infrastructure through this type of online search. So, when I’m getting an answer from a chatbot, it becomes more difficult to discern what is information, what is knowledge, and how the content was composed. With something like Microsoft’s Copilot (as they termed GPT-4 integrated into Bing), you don’t see how the information was assembled—you just get the answer. You don’t receive an explanation of how a language model works, such as through statistics or web crawling, or how the output is generated.
One crucial difference from traditional online searches is that you don’t see different links. That’s the main difference. You don’t need to click on a website anymore. With traditional web search interfaces, you’re used to seeing, say, 10 different links on the first page, and you choose which one to click on, more or less deliberately. There’s an algorithm that structures the order of these websites, and most people click on the links on the first page. Very few go to the second or third pages, so even here it’s pre-structured, and power dynamics are at play. But it’s even more pronounced when you don’t see different options at all. When you just have one answer, you don’t even have the immediate reflection on which link to click on or the possibility that there may be different answers to your question. I think that’s one of the big, crucial differences.
It’s more difficult to see where these many different bits of information are coming from. I don’t know what the knowledge base is with which the model was trained because that’s a trade secret.
Also important is the situatedness of information. Here I’m referring to a paper by Shah and Bender. They write about the serendipity of search. They suggest that when you enter a certain webpage, you might accidentally find surrounding information that could be crucial to your search, even though you weren’t aware of it. And that’s something that gets lost.
Even this reflective moment of questioning what information you want to get and where you think you can find it—like on which webpage—that’s important. It’s also a form of self-questioning during the search process.
Oh yeah, that makes so much sense!
It’s significant—maybe even more than we already discussed. There are more consequences. In a study, AlgorithmWatch and AI Forensics, two NGOs that examine how algorithms impact society and democracy, looked at Bing Copilot and the output it gave to questions about last year’s elections in Hesse, Bavaria, and Switzerland. They found that the model tended to give incorrect information and even plain misinformation. For instance, it provided wrong names for the main candidates and incorrect information about the parties and their programs.
In traditional search, you might go to the parties’ websites or check various sources of information. But when you rely on this model and trust it to provide true information, it could actually change your voting behavior, which has a huge potential impact on democracy.
The other difference I argue—and I hinted at this earlier—is how it changes the person as a subject engaging in this mode of search. If I learn more and more to not judge the information because I just trust a language model to give me a short, convenient answer, that changes how I approach information. This could lead to a dangerously simplistic understanding of how questions can or should be answered, instead of recognizing that there are always multiple possible answers.
Okay, so let’s imagine the everyday situation of someone wanting to know the 10 most famous philosophers. This person sits down and types a question into a chatbot search engine. What short-term problems could they run into, and what kind of long-term problems could we as a society run into if it is not just one person but most of us using chatbots as search engines?
What you can see—and I have a graph of this in my paper, a screenshot—is that when you input a question like ‘Who are the most famous philosophers?’ into Bing Copilot, the answer you get is a list of 10 names with very brief explanations about them. I’ve tried this several times, including recently, and the result is always the same: 10 names of 10 major philosophers. All of them are male, dead, and most are from Europe or North America, so there’s a clear bias. These are philosophers you’d expect — Aristotle, Kant, maybe Nietzsche, potentially Heidegger, and perhaps Confucius as the one Asian philosopher. I’ve never received an answer that includes a female philosopher, a living philosopher, or one from Africa or South America.
If someone—let’s call her Anna—uses this search query, it’s easy for her to assume, ‘Oh, these are the 10 most famous philosophers. Great, I have my answer.’ Anna might then look up more information out of interest. When you scroll down in Bing Copilot, you’ll see follow-up search links like ‘Tell me more about Socrates’ or ‘Tell me more about Kant.’ This means that based on the initial search query, you’ll find further information that aligns with it, but nothing that expands beyond it.
One problem is representation. Anna won’t see that there are also famous female philosophers or living philosophers, or philosophers from regions outside the Global North. This can be a problem on an individual level — maybe Anna won’t be encouraged to study philosophy because she doesn’t see any female philosophers.
On a broader scale, it shows that the knowledge presented in such search engines is often dominant and hegemonic — centred around the most recognized voices. Voices from subcultures or marginalized groups are less often heard and therefore less easily found. This is a much bigger problem on a societal level because it distances us from discovering non-hegemonic knowledge.
That being said, it’s not entirely straightforward—dominant voices are also most commonly found in traditional web searches. However, there is a difference. For example, if Anna types in the same question—’Who are 10 famous philosophers?’—in Bing’s traditional web search, she will find several links to different websites. One of these links might be to ‘The 20 Most Famous Philosophers of All Time,’ or lists of famous philosophers sorted by continent, or famous female philosophers. And you might find this even on the first page. That does make a difference because it gives you the feeling, intuition, and knowledge that you can go beyond what’s presented to you with a singular answer.
If you’re used to having this kind of approach where you need to decide what to click on and do some fact-checking, I think, in a way, it helps you question and stay engaged with the world. Within certain power dynamics, but still. As I already mentioned, websites are also structured and restructured in a certain way, but we still maintain a subjectivity where it’s normal to click on various links, scroll down, and read different headings. This approach allows us to entertain the idea that there may be different ways of thinking about the ’10 most famous philosophers.’
You are more likely to consider that there could be female philosophers, living philosophers, or very famous philosophers from Bolivia, for example. The act of questioning the output of the algorithm is, I think, very crucial. And that is something I fear may get lost through this kind of sealing of knowledge.
Let’s talk a bit more about the chatbot user, because when Google and its competitors designed chatbots and these new types of search engines, they must have had a specific image of the user they were creating their product for. How did they conceptualize their users and why could this be problematic?
Yeah, this ties back to what I just mentioned. There’s a very illuminating, but also, in a sense, very dark, research paper by Metzler et al., who were Google researchers. They advocated for this mode of search, arguing that “we” need direct answers to search queries. They explicitly write about why they think this way, and it’s very interesting to read. The image of the user they present is someone who wants to get an answer from an algorithm or a web search as quickly as possible. In their view, users don’t want to put any effort or time into searching.
They see traditional online search as requiring users to invest time and cognitive work, which they describe as a ‘cognitive burden’—a very strong term that evokes the image of a poor user burdened by the effort of seeking information and knowledge. They argue that with language models, search is much more convenient and time-saving—and that this is exactly what their envisioned users want.
So, in their view, users are very passive. They want convenience, a quick and easy answer, and they don’t want to think beyond that answer. They just want an authoritative response without investing any cognitive labour or thought into the search process.
This sounds dystopian. How can we as individuals and as a society resist the sealing of knowledges? And what kinds of people do we need to be not to become the user Silicon Valley envisioned us to be?
I think one approach is to remain doubtful and to question the outputs of these algorithms. We should think about not just what is shown in the answer, but also what is not shown. This critical approach means that we do need to be aware that answers—and the answer space—are always complex. Life is complex, and that’s okay. The answers we see are often very one-dimensional and frequently miss non-dominant and non-heteronormative voices. It’s important to question this and pose further questions.
We also need to foster digital literacy skills, such as fact-checking information. As I mentioned, these models might provide blatant misinformation about something as important as voting-related information. It’s also important to recognize the power dynamics involved in search. For instance, who among us still goes to the library to find information, looks in books, or consults encyclopaedias? That’s not how we operate anymore, and I’m not criticizing this—it’s just how our society functions now. We all have smartphones, and online search has become one of the most crucial aspects of our societal knowledge infrastructure.
The power we give to search engines and search engine companies is immense because they structure, with their opaque algorithms, how we come to know things. It’s important to understand this. We need to consider how we can break this ‘seal’ that’s occurring and also regain a more partial and situated understanding of knowledge. This involves recognizing that answers aren’t just ‘out there,’ but are specific portrayals of the world.
Thank you very much for that answer. I’m really curious about how you approach these problems in your day-to-day research. Considering the topic of convenience, I think about my own experience in the library. To be honest, I won’t look at every book. I might look at 10 and then think, ‘Oh God, I just need to choose one.’ I’m lazy; I don’t want to go through them all. So, it feels like the problem of convenience has always existed—it’s just evolving. There’s the theoretical solution and then there’s the practical reality. How do you navigate all of that? What’s your position?
Are you asking about the balance between what’s practical versus what’s theoretically good?
Exactly. In theory, you could say, ‘I’ll choose books at random’ or decide to take 20 and create some formal selection system, but in reality, I might only have three or five minutes. Similarly, with search engines, I won’t scroll to the 20th page; I’ll stay on the first one. Some solutions just aren’t realistic.
Yes, I agree. It’s part of the system we operate within—we have limited time. As I mentioned, web searches are pre-structured by algorithms, often influenced by the financial interests of search companies. I’m not suggesting there’s one perfect solution to this dilemma; it’s a difficult issue. However, I think it’s important not to mistake convenience and time-saving as the ultimate goal. That’s something we can and should question. Should convenience be our highest priority? Doesn’t this take away something important from us as a society?
That doesn’t mean we need to read 10 books because, realistically, we won’t. You still choose which book to read based on certain criteria that matter to you—like who the author is, what perspective they bring, and whether you agree with it or not. You might also choose based on the specific angle the book takes on your question. So, even in a limited time, you make a choice. I believe that making a choice when considering a source of information is crucial. It doesn’t take ages, but it is a decision you make as a critical, self-aware individual.
Do you think that, for example, labels or categories within search engines could help? Like having the option to specify that you need to find something quickly, or that it needs to be from a particular area of the world. Or maybe you’re looking for a specific gender perspective. If we start providing information with certain attributes, it might help us be more aware of the framework we’re working within.
First of all, I think having a better understanding of how these systems actually work would be very helpful, and most users don’t even have that. While tweaking the system in that way might be beneficial, you would still be getting a very specific answer space. And you wouldn’t necessarily see the full range of options. To even ask a question like that, you need to be very reflective—you need to already know that it’s possible and be aware that it makes a difference whether you ask for perspectives from, say, Kenya, or non-binary perspectives, or something else.
Okay, thank you very much for that. We’re actually nearing the end of the official part. Is there anything else we haven’t touched on that you’d like to highlight or discuss?
Haha, the typical end-of-interview question. 🙂
Haha, yeah, sorry!
Something that’s generally important to think about is the impact of your own work. For example, at the Institute, the models we build may eventually be released into the world. Same if you decide to work for an AI company: consider the impact your models might have—especially on marginalized groups and marginalized knowledge. It’s crucial to think about how these technologies affect both the users and those who may become invisible or hyper-visible as a result. Maintaining a reflective mindset and recognizing the importance of this reflection, even in stressful job situations, is essential. We need to think about our responsibility in building technology, whether it’s AI or other types of technology. It all impacts how we live together as a society.
Great closing remarks, this is really important advice that we should all consider. Thank you so much for being here and taking the time to answer all these questions!
You’re welcome!

The interview was conducted by Jila Petsch and Madlen Peters.